00:00:00.001 Started by upstream project "autotest-per-patch" build number 126240
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.069 The recommended git tool is: git
00:00:00.069 using credential 00000000-0000-0000-0000-000000000002
00:00:00.071 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.105 Fetching changes from the remote Git repository
00:00:00.107 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.151 Using shallow fetch with depth 1
00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.151 > git --version # timeout=10
00:00:00.195 > git --version # 'git version 2.39.2'
00:00:00.195 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.227 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.227 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.264 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.276 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.289 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:04.289 > git config core.sparsecheckout # timeout=10
00:00:04.300 > git read-tree -mu HEAD # timeout=10
00:00:04.319 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:04.344 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:04.344 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:04.458 [Pipeline] Start of Pipeline
00:00:04.471 [Pipeline] library
00:00:04.473 Loading library shm_lib@master
00:00:04.473 Library shm_lib@master is cached. Copying from home.
00:00:04.487 [Pipeline] node
00:00:04.501 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.502 [Pipeline] {
00:00:04.510 [Pipeline] catchError
00:00:04.511 [Pipeline] {
00:00:04.521 [Pipeline] wrap
00:00:04.529 [Pipeline] {
00:00:04.536 [Pipeline] stage
00:00:04.537 [Pipeline] { (Prologue)
00:00:04.708 [Pipeline] sh
00:00:04.985 + logger -p user.info -t JENKINS-CI
00:00:05.000 [Pipeline] echo
00:00:05.001 Node: WFP8
00:00:05.006 [Pipeline] sh
00:00:05.302 [Pipeline] setCustomBuildProperty
00:00:05.313 [Pipeline] echo
00:00:05.315 Cleanup processes
00:00:05.320 [Pipeline] sh
00:00:05.600 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.600 3403886 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.611 [Pipeline] sh
00:00:05.889 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.889 ++ grep -v 'sudo pgrep'
00:00:05.889 ++ awk '{print $1}'
00:00:05.889 + sudo kill -9
00:00:05.889 + true
00:00:05.902 [Pipeline] cleanWs
00:00:05.909 [WS-CLEANUP] Deleting project workspace...
00:00:05.909 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.914 [WS-CLEANUP] done
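The "Cleanup processes" step above is a small shell idiom worth reading once: list any processes still referencing the workspace, strip the pgrep invocation itself from that listing, and force-kill whatever remains. A minimal standalone sketch of the same pattern (the workspace path is taken from this log; packaging it as a script is illustrative):

    #!/usr/bin/env bash
    # Kill leftovers from a previous run that still reference the workspace.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # pgrep -af prints "PID full-command-line" for every match; grep -v drops
    # the pgrep command itself, and awk keeps only the PID column.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill -9 exits non-zero when the PID list is empty, so '|| true' (the
    # log's '+ true') keeps a 'set -e' script from aborting the build here.
    sudo kill -9 $pids || true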
00:00:05.914 [WS-CLEANUP] done 00:00:05.918 [Pipeline] setCustomBuildProperty 00:00:05.928 [Pipeline] sh 00:00:06.207 + sudo git config --global --replace-all safe.directory '*' 00:00:06.271 [Pipeline] httpRequest 00:00:06.298 [Pipeline] echo 00:00:06.299 Sorcerer 10.211.164.101 is alive 00:00:06.305 [Pipeline] httpRequest 00:00:06.308 HttpMethod: GET 00:00:06.309 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.309 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.312 Response Code: HTTP/1.1 200 OK 00:00:06.312 Success: Status code 200 is in the accepted range: 200,404 00:00:06.313 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.444 [Pipeline] sh 00:00:07.723 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.735 [Pipeline] httpRequest 00:00:07.766 [Pipeline] echo 00:00:07.767 Sorcerer 10.211.164.101 is alive 00:00:07.775 [Pipeline] httpRequest 00:00:07.779 HttpMethod: GET 00:00:07.780 URL: http://10.211.164.101/packages/spdk_91f51bb85b72987c3fe5a26dd93f03d462502d97.tar.gz 00:00:07.780 Sending request to url: http://10.211.164.101/packages/spdk_91f51bb85b72987c3fe5a26dd93f03d462502d97.tar.gz 00:00:07.798 Response Code: HTTP/1.1 200 OK 00:00:07.798 Success: Status code 200 is in the accepted range: 200,404 00:00:07.798 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_91f51bb85b72987c3fe5a26dd93f03d462502d97.tar.gz 00:01:04.279 [Pipeline] sh 00:01:04.563 + tar --no-same-owner -xf spdk_91f51bb85b72987c3fe5a26dd93f03d462502d97.tar.gz 00:01:07.113 [Pipeline] sh 00:01:07.396 + git -C spdk log --oneline -n5 00:01:07.396 91f51bb85 nvme: populate socket_id for pcie controllers 00:01:07.396 c9ef451fa nvme: add spdk_nvme_ctrlr_get_socket_id() 00:01:07.396 b26ca8289 event: add enforce_numa app option 00:01:07.396 83c8cffdc env: add enforce_numa environment option 00:01:07.396 804b11b4b env_dpdk: assert that SOCKET_ID_ANY == SPDK_ENV_SOCKET_ID_ANY 00:01:07.409 [Pipeline] } 00:01:07.427 [Pipeline] // stage 00:01:07.437 [Pipeline] stage 00:01:07.440 [Pipeline] { (Prepare) 00:01:07.461 [Pipeline] writeFile 00:01:07.479 [Pipeline] sh 00:01:07.765 + logger -p user.info -t JENKINS-CI 00:01:07.777 [Pipeline] sh 00:01:08.059 + logger -p user.info -t JENKINS-CI 00:01:08.075 [Pipeline] sh 00:01:08.356 + cat autorun-spdk.conf 00:01:08.356 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.356 SPDK_TEST_NVMF=1 00:01:08.356 SPDK_TEST_NVME_CLI=1 00:01:08.356 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.356 SPDK_TEST_NVMF_NICS=e810 00:01:08.356 SPDK_TEST_VFIOUSER=1 00:01:08.356 SPDK_RUN_UBSAN=1 00:01:08.356 NET_TYPE=phy 00:01:08.364 RUN_NIGHTLY=0 00:01:08.369 [Pipeline] readFile 00:01:08.401 [Pipeline] withEnv 00:01:08.403 [Pipeline] { 00:01:08.415 [Pipeline] sh 00:01:08.695 + set -ex 00:01:08.695 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:08.695 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:08.695 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.695 ++ SPDK_TEST_NVMF=1 00:01:08.695 ++ SPDK_TEST_NVME_CLI=1 00:01:08.695 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.695 ++ SPDK_TEST_NVMF_NICS=e810 00:01:08.695 ++ SPDK_TEST_VFIOUSER=1 00:01:08.695 ++ SPDK_RUN_UBSAN=1 00:01:08.695 ++ NET_TYPE=phy 00:01:08.695 ++ RUN_NIGHTLY=0 00:01:08.695 + case $SPDK_TEST_NVMF_NICS in 00:01:08.695 + DRIVERS=ice 00:01:08.695 + [[ tcp == \r\d\m\a ]] 
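The driver juggling just above follows a common pattern in NIC test jobs: unconditionally unload every RDMA/iWARP module that might claim the interface, ignore "not currently loaded" errors, then load only the driver this job needs (ice for the e810 NICs selected via SPDK_TEST_NVMF_NICS). A sketch of that pattern in isolation:

    #!/usr/bin/env bash
    # Reset NIC drivers to a known state before the test run.
    DRIVERS=ice   # e810 NICs use the ice driver; rdma jobs would pick others
    # rmmod exits non-zero for modules that are not loaded -- that is fine,
    # all we care about is that none of them remain loaded afterwards.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"   # let the job fail here if the driver cannot load
    done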
00:01:08.704 [Pipeline] }
00:01:08.723 [Pipeline] // withEnv
00:01:08.729 [Pipeline] }
00:01:08.746 [Pipeline] // stage
00:01:08.753 [Pipeline] catchError
00:01:08.755 [Pipeline] {
00:01:08.766 [Pipeline] timeout
00:01:08.766 Timeout set to expire in 50 min
00:01:08.768 [Pipeline] {
00:01:08.784 [Pipeline] stage
00:01:08.786 [Pipeline] { (Tests)
00:01:08.804 [Pipeline] sh
00:01:09.087 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:09.087 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:09.087 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:09.087 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:09.087 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:09.087 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:09.087 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:09.087 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:09.087 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:09.087 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:09.087 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:09.087 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:09.087 + source /etc/os-release
00:01:09.087 ++ NAME='Fedora Linux'
00:01:09.087 ++ VERSION='38 (Cloud Edition)'
00:01:09.087 ++ ID=fedora
00:01:09.087 ++ VERSION_ID=38
00:01:09.087 ++ VERSION_CODENAME=
00:01:09.087 ++ PLATFORM_ID=platform:f38
00:01:09.087 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:09.087 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:09.087 ++ LOGO=fedora-logo-icon
00:01:09.087 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:09.087 ++ HOME_URL=https://fedoraproject.org/
00:01:09.087 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:09.087 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:09.087 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:09.087 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:09.087 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:09.087 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:09.087 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:09.087 ++ SUPPORT_END=2024-05-14
00:01:09.087 ++ VARIANT='Cloud Edition'
00:01:09.087 ++ VARIANT_ID=cloud
00:01:09.087 + uname -a
00:01:09.087 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:09.087 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:11.021 Hugepages
00:01:11.021 node   hugesize    free /  total
00:01:11.021 node0  1048576kB      0 /      0
00:01:11.021 node0     2048kB      0 /      0
00:01:11.021 node1  1048576kB      0 /      0
00:01:11.021 node1     2048kB      0 /      0
00:01:11.021 
00:01:11.021 Type   BDF           Vendor Device NUMA  Driver   Device  Block devices
00:01:11.021 I/OAT  0000:00:04.0  8086   2021   0     ioatdma  -       -
00:01:11.021 I/OAT  0000:00:04.1  8086   2021   0     ioatdma  -       -
00:01:11.021 I/OAT  0000:00:04.2  8086   2021   0     ioatdma  -       -
00:01:11.021 I/OAT  0000:00:04.3  8086   2021   0     ioatdma  -       -
00:01:11.021 I/OAT  0000:00:04.4  8086   2021   0     ioatdma  -       -
00:01:11.021 I/OAT  0000:00:04.5  8086   2021   0     ioatdma  -       -
00:01:11.021 I/OAT  0000:00:04.6  8086   2021   0     ioatdma  -       -
00:01:11.021 I/OAT  0000:00:04.7  8086   2021   0     ioatdma  -       -
00:01:11.021 NVMe   0000:5e:00.0  8086   0a54   0     nvme     nvme0   nvme0n1
00:01:11.021 I/OAT  0000:80:04.0  8086   2021   1     ioatdma  -       -
00:01:11.021 I/OAT  0000:80:04.1  8086   2021   1     ioatdma  -       -
00:01:11.021 I/OAT  0000:80:04.2  8086   2021   1     ioatdma  -       -
00:01:11.021 I/OAT  0000:80:04.3  8086   2021   1     ioatdma  -       -
00:01:11.021 I/OAT  0000:80:04.4  8086   2021   1     ioatdma  -       -
00:01:11.021 I/OAT  0000:80:04.5  8086   2021   1     ioatdma  -       -
00:01:11.021 I/OAT  0000:80:04.6  8086   2021   1     ioatdma  -       -
00:01:11.021 I/OAT  0000:80:04.7  8086   2021   1     ioatdma  -       -
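The Hugepages section of the status output reports zero free/total pages on both NUMA nodes at this point; the same counters can be read directly from sysfs, which is a convenient sanity check before and after setup.sh allocates pages. A sketch using the standard kernel paths:

    #!/usr/bin/env bash
    # Print per-NUMA-node hugepage counts from sysfs (standard kernel layout).
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}                 # e.g. 2048kB or 1048576kB
            total=$(cat "$hp/nr_hugepages")
            free=$(cat "$hp/free_hugepages")
            echo "$(basename "$node") $size free/total: $free/$total"
        done
    done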
00:01:11.021 + rm -f /tmp/spdk-ld-path
00:01:11.021 + source autorun-spdk.conf
00:01:11.021 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.021 ++ SPDK_TEST_NVMF=1
00:01:11.021 ++ SPDK_TEST_NVME_CLI=1
00:01:11.021 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.021 ++ SPDK_TEST_NVMF_NICS=e810
00:01:11.021 ++ SPDK_TEST_VFIOUSER=1
00:01:11.021 ++ SPDK_RUN_UBSAN=1
00:01:11.021 ++ NET_TYPE=phy
00:01:11.021 ++ RUN_NIGHTLY=0
00:01:11.021 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:11.021 + [[ -n '' ]]
00:01:11.021 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:11.021 + for M in /var/spdk/build-*-manifest.txt
00:01:11.021 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:11.021 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:11.022 + for M in /var/spdk/build-*-manifest.txt
00:01:11.022 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:11.022 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:11.022 ++ uname
00:01:11.022 + [[ Linux == \L\i\n\u\x ]]
00:01:11.022 + sudo dmesg -T
00:01:11.022 + sudo dmesg --clear
00:01:11.281 + dmesg_pid=3404929
00:01:11.281 + [[ Fedora Linux == FreeBSD ]]
00:01:11.281 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:11.281 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:11.281 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:11.281 + [[ -x /usr/src/fio-static/fio ]]
00:01:11.281 + export FIO_BIN=/usr/src/fio-static/fio
00:01:11.281 + FIO_BIN=/usr/src/fio-static/fio
00:01:11.281 + sudo dmesg -Tw
00:01:11.281 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:11.281 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:11.281 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:11.281 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:11.281 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:11.281 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:11.281 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:11.281 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
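The FIO_BIN/QEMU_BIN dance above is a "prefer the prebuilt binary if present" idiom: each export happens only when the corresponding path exists, so later test scripts can rely on the variable or fall back to whatever is on PATH. Reduced to its core (paths as in this log; the grouping into one snippet is illustrative):

    #!/usr/bin/env bash
    # Export tool locations only when the prebuilt binaries are installed.
    [[ -x /usr/src/fio-static/fio ]] && export FIO_BIN=/usr/src/fio-static/fio
    # Respect a VFIO_QEMU_BIN already set by the environment ([[ -v VAR ]]
    # tests whether the variable is defined, not whether it is non-empty).
    if [[ ! -v VFIO_QEMU_BIN && -e /usr/local/qemu/vfio-user-latest ]]; then
        export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
    fi
    [[ -e /usr/local/qemu/vanilla-latest ]] &&
        export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64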
00:01:11.281 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:11.281 Test configuration:
00:01:11.281 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.281 SPDK_TEST_NVMF=1
00:01:11.281 SPDK_TEST_NVME_CLI=1
00:01:11.281 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.281 SPDK_TEST_NVMF_NICS=e810
00:01:11.281 SPDK_TEST_VFIOUSER=1
00:01:11.281 SPDK_RUN_UBSAN=1
00:01:11.281 NET_TYPE=phy
00:01:11.281 RUN_NIGHTLY=0
21:39:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
21:39:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
21:39:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
21:39:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
21:39:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:39:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:39:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:39:05 -- paths/export.sh@5 -- $ export PATH
21:39:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:39:05 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
21:39:05 -- common/autobuild_common.sh@444 -- $ date +%s
21:39:05 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721072345.XXXXXX
21:39:05 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721072345.qkO9gv
21:39:05 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
21:39:05 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
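autobuild's SPDK_WORKSPACE above comes from a timestamped mktemp call: date +%s supplies the epoch seconds (1721072345 here) and mktemp appends a random suffix, so concurrent jobs never collide and stale scratch directories are easy to date. The idiom in isolation (the rm-on-exit trap is illustrative, not from this log):

    #!/usr/bin/env bash
    # Per-run scratch directory: epoch timestamp + random mktemp suffix.
    SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
    # Remove the scratch dir no matter how the script exits (illustrative).
    trap 'rm -rf "$SPDK_WORKSPACE"' EXIT
    echo "workspace: $SPDK_WORKSPACE"   # e.g. /tmp/spdk_1721072345.qkO9gv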
21:39:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
21:39:05 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
21:39:05 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
21:39:05 -- common/autobuild_common.sh@460 -- $ get_config_params
21:39:05 -- common/autotest_common.sh@396 -- $ xtrace_disable
21:39:05 -- common/autotest_common.sh@10 -- $ set +x
21:39:05 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
21:39:05 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
21:39:05 -- pm/common@17 -- $ local monitor
21:39:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:39:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:39:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:39:05 -- pm/common@21 -- $ date +%s
21:39:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:39:05 -- pm/common@21 -- $ date +%s
21:39:05 -- pm/common@25 -- $ sleep 1
21:39:05 -- pm/common@21 -- $ date +%s
21:39:05 -- pm/common@21 -- $ date +%s
21:39:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721072345
21:39:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721072345
21:39:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721072345
21:39:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721072345
00:01:11.281 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721072345_collect-vmstat.pm.log
00:01:11.281 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721072345_collect-cpu-temp.pm.log
00:01:11.281 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721072345_collect-cpu-load.pm.log
00:01:11.281 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721072345_collect-bmc-pm.bmc.pm.log
00:01:12.218 21:39:06 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
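start_monitor_resources launches each collector (cpu-load, cpu-temp, vmstat, bmc-pm) against a shared output directory and a prefix built from the same epoch timestamp, and a trap stops them when the script exits; that is why four "Redirecting to ...pm.log" lines appear back to back. A simplified sketch of that launch/teardown shape (the -d/-l/-p flags are the ones used above; the explicit backgrounding and stop_all helper are illustrative, since the real collectors manage their own logging):

    #!/usr/bin/env bash
    # Launch resource monitors for the duration of the build, stop them on exit.
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    prefix=monitor.autobuild.sh.$(date +%s)
    pm=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
    pids=()
    for collector in collect-cpu-load collect-cpu-temp collect-vmstat; do
        "$pm/$collector" -d "$out" -l -p "$prefix" &   # background each one
        pids+=($!)
    done
    stop_all() { kill "${pids[@]}" 2>/dev/null || true; }
    trap stop_all EXIT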
00:01:12.218 21:39:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
21:39:06 -- spdk/autobuild.sh@12 -- $ umask 022
21:39:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
21:39:06 -- spdk/autobuild.sh@16 -- $ date -u
00:01:12.218 Mon Jul 15 07:39:06 PM UTC 2024
21:39:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:12.218 v24.09-pre-231-g91f51bb85
21:39:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
21:39:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
21:39:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
21:39:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
21:39:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable
21:39:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:12.477 ************************************
00:01:12.477 START TEST ubsan
00:01:12.477 ************************************
21:39:06 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:12.477 using ubsan
00:01:12.477 
00:01:12.477 real	0m0.001s
00:01:12.477 user	0m0.000s
00:01:12.477 sys	0m0.000s
21:39:06 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
21:39:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:12.478 ************************************
00:01:12.478 END TEST ubsan
00:01:12.478 ************************************
21:39:06 -- common/autotest_common.sh@1142 -- $ return 0
21:39:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
21:39:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
21:39:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
21:39:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
21:39:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
21:39:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
21:39:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
21:39:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
21:39:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:12.478 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:12.478 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:12.737 Using 'verbs' RDMA provider
00:01:25.883 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:35.865 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:35.866 Creating mk/config.mk...done.
00:01:35.866 Creating mk/cc.flags.mk...done.
00:01:35.866 Type 'make' to build.
21:39:29 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
21:39:29 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
21:39:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable
21:39:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:35.866 ************************************
00:01:35.866 START TEST make
00:01:35.866 ************************************
21:39:29 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:35.866 make[1]: Nothing to be done for 'all'.
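run_test is a thin wrapper: it prints the START/END banners seen above around the command it executes (first echo 'using ubsan', then make -j96) and times the body, which is where the real/user/sys lines come from. A minimal re-creation of that observable shape (not SPDK's actual implementation, just the behavior visible in this log):

    #!/usr/bin/env bash
    # Banner-and-time wrapper mimicking the run_test output in this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # bash's time keyword: real/user/sys lines
        local rc=$?                    # time propagates the command's status
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test ubsan echo 'using ubsan'
    run_test make make -j96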
00:01:37.248 The Meson build system
00:01:37.248 Version: 1.3.1
00:01:37.248 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:37.248 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:37.248 Build type: native build
00:01:37.248 Project name: libvfio-user
00:01:37.248 Project version: 0.0.1
00:01:37.248 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:37.248 C linker for the host machine: cc ld.bfd 2.39-16
00:01:37.248 Host machine cpu family: x86_64
00:01:37.248 Host machine cpu: x86_64
00:01:37.248 Run-time dependency threads found: YES
00:01:37.248 Library dl found: YES
00:01:37.248 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:37.248 Run-time dependency json-c found: YES 0.17
00:01:37.248 Run-time dependency cmocka found: YES 1.1.7
00:01:37.248 Program pytest-3 found: NO
00:01:37.248 Program flake8 found: NO
00:01:37.248 Program misspell-fixer found: NO
00:01:37.248 Program restructuredtext-lint found: NO
00:01:37.248 Program valgrind found: YES (/usr/bin/valgrind)
00:01:37.248 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:37.248 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:37.248 Compiler for C supports arguments -Wwrite-strings: YES
00:01:37.248 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:37.248 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:37.248 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:37.249 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:37.249 Build targets in project: 8
00:01:37.249 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:37.249 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:37.249 
00:01:37.249 libvfio-user 0.0.1
00:01:37.249 
00:01:37.249 User defined options
00:01:37.249   buildtype      : debug
00:01:37.249   default_library: shared
00:01:37.249   libdir         : /usr/local/lib
00:01:37.249 
00:01:37.249 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:37.816 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:37.816 [1/37] Compiling C object samples/null.p/null.c.o
[2/37] Compiling C object samples/lspci.p/lspci.c.o
[3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
[4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
[5/37] Compiling C object samples/client.p/.._lib_tran.c.o
[6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
[7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
[8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
[9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
[10/37] Compiling C object samples/client.p/.._lib_migration.c.o
[11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
[12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
[13/37] Compiling C object test/unit_tests.p/mocks.c.o
[14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
[15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
[16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
[17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
[18/37] Compiling C object samples/server.p/server.c.o
[19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
[20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
[21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
[22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
[23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
[24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
[25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
[26/37] Compiling C object samples/client.p/client.c.o
[27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
[28/37] Linking target samples/client
[29/37] Linking target test/unit_tests
[30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
[31/37] Linking target lib/libvfio-user.so.0.0.1
[32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
[33/37] Linking target samples/gpio-pci-idio-16
[34/37] Linking target samples/shadow_ioeventfd_server
[35/37] Linking target samples/lspci
[36/37] Linking target samples/null
[37/37] Linking target samples/server
00:01:38.333 INFO: autodetecting backend as ninja
00:01:38.333 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:38.333 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:38.593 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:38.593 ninja: no work to do.
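The libvfio-user build above is plain meson/ninja: configure a debug, shared-library build tree, let ninja compile the 37 targets, then install into a staging directory with DESTDIR so nothing touches the real /usr/local. Reconstructed as standalone commands (directories and option values from this log; driving meson directly is illustrative, since SPDK's build normally wraps this step):

    #!/usr/bin/env bash
    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    build=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    stage=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
    # Configure: buildtype/default_library/libdir match the summary above.
    meson setup "$build" "$src" --buildtype=debug \
        --default-library=shared --libdir=/usr/local/lib
    ninja -C "$build"
    # Stage the install under $stage instead of the system root.
    DESTDIR=$stage meson install --quiet -C "$build"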
00:01:43.898 The Meson build system
00:01:43.898 Version: 1.3.1
00:01:43.898 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:43.898 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:43.898 Build type: native build
00:01:43.898 Program cat found: YES (/usr/bin/cat)
00:01:43.898 Project name: DPDK
00:01:43.898 Project version: 24.03.0
00:01:43.898 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:43.898 C linker for the host machine: cc ld.bfd 2.39-16
00:01:43.898 Host machine cpu family: x86_64
00:01:43.898 Host machine cpu: x86_64
00:01:43.898 Message: ## Building in Developer Mode ##
00:01:43.898 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:43.898 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:43.898 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:43.898 Program python3 found: YES (/usr/bin/python3)
00:01:43.898 Program cat found: YES (/usr/bin/cat)
00:01:43.898 Compiler for C supports arguments -march=native: YES
00:01:43.898 Checking for size of "void *" : 8
00:01:43.898 Checking for size of "void *" : 8 (cached)
00:01:43.898 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:43.898 Library m found: YES
00:01:43.898 Library numa found: YES
00:01:43.898 Has header "numaif.h" : YES
00:01:43.898 Library fdt found: NO
00:01:43.898 Library execinfo found: NO
00:01:43.898 Has header "execinfo.h" : YES
00:01:43.898 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:43.898 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:43.898 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:43.898 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:43.898 Run-time dependency openssl found: YES 3.0.9
00:01:43.898 Run-time dependency libpcap found: YES 1.10.4
00:01:43.898 Has header "pcap.h" with dependency libpcap: YES
00:01:43.898 Compiler for C supports arguments -Wcast-qual: YES
00:01:43.898 Compiler for C supports arguments -Wdeprecated: YES
00:01:43.898 Compiler for C supports arguments -Wformat: YES
00:01:43.898 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:43.898 Compiler for C supports arguments -Wformat-security: NO
00:01:43.898 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:43.898 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:43.898 Compiler for C supports arguments -Wnested-externs: YES
00:01:43.898 Compiler for C supports arguments -Wold-style-definition: YES
00:01:43.898 Compiler for C supports arguments -Wpointer-arith: YES
00:01:43.898 Compiler for C supports arguments -Wsign-compare: YES
00:01:43.898 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:43.898 Compiler for C supports arguments -Wundef: YES
00:01:43.898 Compiler for C supports arguments -Wwrite-strings: YES
00:01:43.898 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:43.898 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:43.898 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:43.898 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:43.898 Program objdump found: YES (/usr/bin/objdump)
00:01:43.898 Compiler for C supports arguments -mavx512f: YES
00:01:43.898 Checking if "AVX512 checking" compiles: YES
00:01:43.898 Fetching value of define "__SSE4_2__" : 1
00:01:43.898 Fetching value of define "__AES__" : 1
00:01:43.898 Fetching value of define "__AVX__" : 1
00:01:43.898 Fetching value of define "__AVX2__" : 1
00:01:43.898 Fetching value of define "__AVX512BW__" : 1
00:01:43.898 Fetching value of define "__AVX512CD__" : 1
00:01:43.898 Fetching value of define "__AVX512DQ__" : 1
00:01:43.898 Fetching value of define "__AVX512F__" : 1
00:01:43.898 Fetching value of define "__AVX512VL__" : 1
00:01:43.898 Fetching value of define "__PCLMUL__" : 1
00:01:43.898 Fetching value of define "__RDRND__" : 1
00:01:43.898 Fetching value of define "__RDSEED__" : 1
00:01:43.898 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:43.898 Fetching value of define "__znver1__" : (undefined)
00:01:43.898 Fetching value of define "__znver2__" : (undefined)
00:01:43.898 Fetching value of define "__znver3__" : (undefined)
00:01:43.898 Fetching value of define "__znver4__" : (undefined)
00:01:43.898 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:43.898 Message: lib/log: Defining dependency "log"
00:01:43.898 Message: lib/kvargs: Defining dependency "kvargs"
00:01:43.898 Message: lib/telemetry: Defining dependency "telemetry"
00:01:43.898 Checking for function "getentropy" : NO
00:01:43.898 Message: lib/eal: Defining dependency "eal"
00:01:43.898 Message: lib/ring: Defining dependency "ring"
00:01:43.898 Message: lib/rcu: Defining dependency "rcu"
00:01:43.898 Message: lib/mempool: Defining dependency "mempool"
00:01:43.898 Message: lib/mbuf: Defining dependency "mbuf"
00:01:43.898 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:43.898 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:43.898 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:43.898 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:43.898 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:43.898 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:43.898 Compiler for C supports arguments -mpclmul: YES
00:01:43.898 Compiler for C supports arguments -maes: YES
00:01:43.898 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:43.898 Compiler for C supports arguments -mavx512bw: YES
00:01:43.898 Compiler for C supports arguments -mavx512dq: YES
00:01:43.898 Compiler for C supports arguments -mavx512vl: YES
00:01:43.898 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:43.898 Compiler for C supports arguments -mavx2: YES
00:01:43.898 Compiler for C supports arguments -mavx: YES
00:01:43.898 Message: lib/net: Defining dependency "net"
00:01:43.898 Message: lib/meter: Defining dependency "meter"
00:01:43.898 Message: lib/ethdev: Defining dependency "ethdev"
00:01:43.898 Message: lib/pci: Defining dependency "pci"
00:01:43.898 Message: lib/cmdline: Defining dependency "cmdline"
00:01:43.898 Message: lib/hash: Defining dependency "hash"
00:01:43.898 Message: lib/timer: Defining dependency "timer"
00:01:43.898 Message: lib/compressdev: Defining dependency "compressdev"
00:01:43.898 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:43.898 Message: lib/dmadev: Defining dependency "dmadev"
00:01:43.898 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:43.898 Message: lib/power: Defining dependency "power"
00:01:43.898 Message: lib/reorder: Defining dependency "reorder"
00:01:43.899 Message: lib/security: Defining dependency "security"
00:01:43.899 Has header "linux/userfaultfd.h" : YES
00:01:43.899 Has header "linux/vduse.h" : YES
00:01:43.899 Message: lib/vhost: Defining dependency "vhost"
00:01:43.899 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:43.899 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:43.899 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:43.899 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:43.899 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:43.899 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:43.899 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:43.899 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:43.899 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:43.899 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:43.899 Program doxygen found: YES (/usr/bin/doxygen)
00:01:43.899 Configuring doxy-api-html.conf using configuration
00:01:43.899 Configuring doxy-api-man.conf using configuration
00:01:43.899 Program mandb found: YES (/usr/bin/mandb)
00:01:43.899 Program sphinx-build found: NO
00:01:43.899 Configuring rte_build_config.h using configuration
00:01:43.899 Message:
00:01:43.899 =================
00:01:43.899 Applications Enabled
00:01:43.899 =================
00:01:43.899 
00:01:43.899 apps:
00:01:43.899 
00:01:43.899 
00:01:43.899 Message:
00:01:43.899 =================
00:01:43.899 Libraries Enabled
00:01:43.899 =================
00:01:43.899 
00:01:43.899 libs:
00:01:43.899 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:43.899 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:43.899 cryptodev, dmadev, power, reorder, security, vhost,
00:01:43.899 
00:01:43.899 Message:
00:01:43.899 ===============
00:01:43.899 Drivers Enabled
00:01:43.899 ===============
00:01:43.899 
00:01:43.899 common:
00:01:43.899 
00:01:43.899 bus:
00:01:43.899 pci, vdev,
00:01:43.899 mempool:
00:01:43.899 ring,
00:01:43.899 dma:
00:01:43.899 
00:01:43.899 net:
00:01:43.899 
00:01:43.899 crypto:
00:01:43.899 
00:01:43.899 compress:
00:01:43.899 
00:01:43.899 vdpa:
00:01:43.899 
00:01:43.899 
00:01:43.899 Message:
00:01:43.899 =================
00:01:43.899 Content Skipped
00:01:43.899 =================
00:01:43.899 
00:01:43.899 apps:
00:01:43.899 dumpcap: explicitly disabled via build config
00:01:43.899 graph: explicitly disabled via build config
00:01:43.899 pdump: explicitly disabled via build config
00:01:43.899 proc-info: explicitly disabled via build config
00:01:43.899 test-acl: explicitly disabled via build config
00:01:43.899 test-bbdev: explicitly disabled via build config
00:01:43.899 test-cmdline: explicitly disabled via build config
00:01:43.899 test-compress-perf: explicitly disabled via build config
00:01:43.899 test-crypto-perf: explicitly disabled via build config
00:01:43.899 test-dma-perf: explicitly disabled via build config
00:01:43.899 test-eventdev: explicitly disabled via build config
00:01:43.899 test-fib: explicitly disabled via build config
00:01:43.899 test-flow-perf: explicitly disabled via build config
00:01:43.899 test-gpudev: explicitly disabled via build config
00:01:43.899 test-mldev: explicitly disabled via build config
00:01:43.899 test-pipeline: explicitly disabled via build config
00:01:43.899 test-pmd: explicitly disabled via build config
00:01:43.899 test-regex: explicitly disabled via build config
00:01:43.899 test-sad: explicitly disabled via build config
00:01:43.899 test-security-perf: explicitly disabled via build config
00:01:43.899 
00:01:43.899 libs:
00:01:43.899 argparse: explicitly disabled via build config
00:01:43.899 metrics: explicitly disabled via build config
00:01:43.899 acl: explicitly disabled via build config
00:01:43.899 bbdev: explicitly disabled via build config
00:01:43.899 bitratestats: explicitly disabled via build config
00:01:43.899 bpf: explicitly disabled via build config
00:01:43.899 cfgfile: explicitly disabled via build config
00:01:43.899 distributor: explicitly disabled via build config
00:01:43.899 efd: explicitly disabled via build config
00:01:43.899 eventdev: explicitly disabled via build config
00:01:43.899 dispatcher: explicitly disabled via build config
00:01:43.899 gpudev: explicitly disabled via build config
00:01:43.899 gro: explicitly disabled via build config
00:01:43.899 gso: explicitly disabled via build config
00:01:43.899 ip_frag: explicitly disabled via build config
00:01:43.899 jobstats: explicitly disabled via build config
00:01:43.899 latencystats: explicitly disabled via build config
00:01:43.899 lpm: explicitly disabled via build config
00:01:43.899 member: explicitly disabled via build config
00:01:43.899 pcapng: explicitly disabled via build config
00:01:43.899 rawdev: explicitly disabled via build config
00:01:43.899 regexdev: explicitly disabled via build config
00:01:43.899 mldev: explicitly disabled via build config
00:01:43.899 rib: explicitly disabled via build config
00:01:43.899 sched: explicitly disabled via build config
00:01:43.899 stack: explicitly disabled via build config
00:01:43.899 ipsec: explicitly disabled via build config
00:01:43.899 pdcp: explicitly disabled via build config
00:01:43.899 fib: explicitly disabled via build config
00:01:43.899 port: explicitly disabled via build config
00:01:43.899 pdump: explicitly disabled via build config
00:01:43.899 table: explicitly disabled via build config
00:01:43.899 pipeline: explicitly disabled via build config
00:01:43.899 graph: explicitly disabled via build config
00:01:43.899 node: explicitly disabled via build config
00:01:43.899 
00:01:43.899 drivers:
00:01:43.899 common/cpt: not in enabled drivers build config
00:01:43.899 common/dpaax: not in enabled drivers build config
00:01:43.899 common/iavf: not in enabled drivers build config
00:01:43.899 common/idpf: not in enabled drivers build config
00:01:43.899 common/ionic: not in enabled drivers build config
00:01:43.899 common/mvep: not in enabled drivers build config
00:01:43.899 common/octeontx: not in enabled drivers build config
00:01:43.899 bus/auxiliary: not in enabled drivers build config
00:01:43.899 bus/cdx: not in enabled drivers build config
00:01:43.899 bus/dpaa: not in enabled drivers build config
00:01:43.899 bus/fslmc: not in enabled drivers build config
00:01:43.899 bus/ifpga: not in enabled drivers build config
00:01:43.899 bus/platform: not in enabled drivers build config
00:01:43.899 bus/uacce: not in enabled drivers build config
00:01:43.899 bus/vmbus: not in enabled drivers build config
00:01:43.899 common/cnxk: not in enabled drivers build config
00:01:43.899 common/mlx5: not in enabled drivers build config
00:01:43.899 common/nfp: not in enabled drivers build config
00:01:43.899 common/nitrox: not in enabled drivers build config
00:01:43.899 common/qat: not in enabled drivers build config
00:01:43.899 common/sfc_efx: not in enabled drivers build config
00:01:43.899 mempool/bucket: not in enabled drivers build config
00:01:43.899 mempool/cnxk: not in enabled drivers build config
00:01:43.899 mempool/dpaa: not in enabled drivers build config
00:01:43.899 mempool/dpaa2: not in enabled drivers build config
00:01:43.899 mempool/octeontx: not in enabled drivers build config
00:01:43.899 mempool/stack: not in enabled drivers build config
00:01:43.899 dma/cnxk: not in enabled drivers build config
00:01:43.899 dma/dpaa: not in enabled drivers build config
00:01:43.899 dma/dpaa2: not in enabled drivers build config
00:01:43.899 dma/hisilicon: not in enabled drivers build config
00:01:43.899 dma/idxd: not in enabled drivers build config
00:01:43.899 dma/ioat: not in enabled drivers build config
00:01:43.899 dma/skeleton: not in enabled drivers build config
00:01:43.899 net/af_packet: not in enabled drivers build config
00:01:43.899 net/af_xdp: not in enabled drivers build config
00:01:43.899 net/ark: not in enabled drivers build config
00:01:43.899 net/atlantic: not in enabled drivers build config
00:01:43.899 net/avp: not in enabled drivers build config
00:01:43.899 net/axgbe: not in enabled drivers build config
00:01:43.899 net/bnx2x: not in enabled drivers build config
00:01:43.899 net/bnxt: not in enabled drivers build config
00:01:43.899 net/bonding: not in enabled drivers build config
00:01:43.899 net/cnxk: not in enabled drivers build config
00:01:43.899 net/cpfl: not in enabled drivers build config
00:01:43.899 net/cxgbe: not in enabled drivers build config
00:01:43.899 net/dpaa: not in enabled drivers build config
00:01:43.899 net/dpaa2: not in enabled drivers build config
00:01:43.899 net/e1000: not in enabled drivers build config
00:01:43.899 net/ena: not in enabled drivers build config
00:01:43.899 net/enetc: not in enabled drivers build config
00:01:43.899 net/enetfec: not in enabled drivers build config
00:01:43.899 net/enic: not in enabled drivers build config
00:01:43.899 net/failsafe: not in enabled drivers build config
00:01:43.899 net/fm10k: not in enabled drivers build config
00:01:43.899 net/gve: not in enabled drivers build config
00:01:43.899 net/hinic: not in enabled drivers build config
00:01:43.899 net/hns3: not in enabled drivers build config
00:01:43.899 net/i40e: not in enabled drivers build config
00:01:43.899 net/iavf: not in enabled drivers build config
00:01:43.899 net/ice: not in enabled drivers build config
00:01:43.899 net/idpf: not in enabled drivers build config
00:01:43.899 net/igc: not in enabled drivers build config
00:01:43.899 net/ionic: not in enabled drivers build config
00:01:43.899 net/ipn3ke: not in enabled drivers build config
00:01:43.899 net/ixgbe: not in enabled drivers build config
00:01:43.899 net/mana: not in enabled drivers build config
00:01:43.899 net/memif: not in enabled drivers build config
00:01:43.899 net/mlx4: not in enabled drivers build config
00:01:43.899 net/mlx5: not in enabled drivers build config
00:01:43.899 net/mvneta: not in enabled drivers build config
00:01:43.899 net/mvpp2: not in enabled drivers build config
00:01:43.899 net/netvsc: not in enabled drivers build config
00:01:43.899 net/nfb: not in enabled drivers build config
00:01:43.899 net/nfp: not in enabled drivers build config
00:01:43.899 net/ngbe: not in enabled drivers build config
00:01:43.899 net/null: not in enabled drivers build config
00:01:43.899 net/octeontx: not in enabled drivers build config
00:01:43.899 net/octeon_ep: not in enabled drivers build config
00:01:43.899 net/pcap: not in enabled drivers build config
00:01:43.899 net/pfe: not in enabled drivers build config
00:01:43.899 net/qede: not in enabled drivers build config
00:01:43.899 net/ring: not in enabled drivers build config
00:01:43.899 net/sfc: not in enabled drivers build config
00:01:43.899 net/softnic: not in enabled drivers build config
00:01:43.899 net/tap: not in enabled drivers build config
00:01:43.899 net/thunderx: not in enabled drivers build config
00:01:43.900 net/txgbe: not in enabled drivers build config
00:01:43.900 net/vdev_netvsc: not in enabled drivers build config
00:01:43.900 net/vhost: not in enabled drivers build config
00:01:43.900 net/virtio: not in enabled drivers build config
00:01:43.900 net/vmxnet3: not in enabled drivers build config
00:01:43.900 raw/*: missing internal dependency, "rawdev"
00:01:43.900 crypto/armv8: not in enabled drivers build config
00:01:43.900 crypto/bcmfs: not in enabled drivers build config
00:01:43.900 crypto/caam_jr: not in enabled drivers build config
00:01:43.900 crypto/ccp: not in enabled drivers build config
00:01:43.900 crypto/cnxk: not in enabled drivers build config
00:01:43.900 crypto/dpaa_sec: not in enabled drivers build config
00:01:43.900 crypto/dpaa2_sec: not in enabled drivers build config
00:01:43.900 crypto/ipsec_mb: not in enabled drivers build config
00:01:43.900 crypto/mlx5: not in enabled drivers build config
00:01:43.900 crypto/mvsam: not in enabled drivers build config
00:01:43.900 crypto/nitrox: not in enabled drivers build config
00:01:43.900 crypto/null: not in enabled drivers build config
00:01:43.900 crypto/octeontx: not in enabled drivers build config
00:01:43.900 crypto/openssl: not in enabled drivers build config
00:01:43.900 crypto/scheduler: not in enabled drivers build config
00:01:43.900 crypto/uadk: not in enabled drivers build config
00:01:43.900 crypto/virtio: not in enabled drivers build config
00:01:43.900 compress/isal: not in enabled drivers build config
00:01:43.900 compress/mlx5: not in enabled drivers build config
00:01:43.900 compress/nitrox: not in enabled drivers build config
00:01:43.900 compress/octeontx: not in enabled drivers build config
00:01:43.900 compress/zlib: not in enabled drivers build config
00:01:43.900 regex/*: missing internal dependency, "regexdev"
00:01:43.900 ml/*: missing internal dependency, "mldev"
00:01:43.900 vdpa/ifc: not in enabled drivers build config
00:01:43.900 vdpa/mlx5: not in enabled drivers build config
00:01:43.900 vdpa/nfp: not in enabled drivers build config
00:01:43.900 vdpa/sfc: not in enabled drivers build config
00:01:43.900 event/*: missing internal dependency, "eventdev"
00:01:43.900 baseband/*: missing internal dependency, "bbdev"
00:01:43.900 gpu/*: missing internal dependency, "gpudev"
00:01:43.900 
00:01:43.900 
00:01:43.900 Build targets in project: 85
00:01:43.900 
00:01:43.900 DPDK 24.03.0
00:01:43.900 
00:01:43.900 User defined options
00:01:43.900   buildtype : debug
00:01:43.900   default_library : shared
00:01:43.900   libdir : lib
00:01:43.900   prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:43.900   c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:43.900   c_link_args : 
00:01:43.900   cpu_instruction_set: native
00:01:43.900   disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:43.900   disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:43.900   enable_docs : false
00:01:43.900   enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:43.900   enable_kmods : false
00:01:43.900   max_lcores : 128
00:01:43.900   tests : false
00:01:43.900 
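This option summary is how SPDK trims DPDK down to what it needs: every app and most libraries are disabled, only the pci/vdev buses and the ring mempool driver are enabled, and the c_args silence warnings that gcc 13 raises in this DPDK snapshot. Reproduced by hand it would look roughly like this (option values copied from the summary above; invoking meson directly is illustrative, since the log drives it through SPDK's configure):

    #!/usr/bin/env bash
    # Hand-rolled equivalent of the DPDK meson configuration summarized above.
    meson setup build-tmp dpdk \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Ddisable_apps=test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump \
        -Ddisable_libs=bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false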
00:01:43.900 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:44.167 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:44.167 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[10/268] Linking static target lib/librte_kvargs.a
[11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
[18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[19/268] Linking static target lib/librte_log.a
[20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[22/268] Linking static target lib/librte_pci.a
[23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[43/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
[44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[45/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[48/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
[49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[50/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[51/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
[53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
[55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[56/268] Linking static target lib/librte_meter.a
[57/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[63/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[64/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
[65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[68/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[70/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
[71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:44.686 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:44.686 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:44.686 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:44.686 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:44.686 [76/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:44.686 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:44.686 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:44.686 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:44.686 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:44.686 [81/268] Linking static target lib/librte_ring.a 00:01:44.686 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:44.686 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:44.686 [84/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:44.686 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:44.686 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:44.686 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:44.686 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:44.686 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:44.686 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:44.686 [91/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:44.686 [92/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:44.944 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:44.944 [94/268] Linking static target lib/librte_telemetry.a 00:01:44.944 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:44.944 [96/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:44.944 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:44.944 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:44.944 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:44.944 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:44.944 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:44.944 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:44.944 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:44.944 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:44.944 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.944 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:44.944 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:44.944 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:44.944 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:44.944 [110/268] Linking static target lib/librte_mempool.a 00:01:44.944 [111/268] Linking static target lib/librte_net.a 00:01:44.944 [112/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:44.944 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:44.944 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:44.944 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:44.944 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:44.944 [117/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:44.944 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:44.944 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:44.944 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:44.944 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:44.944 [122/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:44.944 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:44.944 [124/268] Linking static target lib/librte_rcu.a 00:01:44.944 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:44.944 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:44.944 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:44.944 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:44.944 [129/268] Linking static target lib/librte_eal.a 00:01:44.944 [130/268] Linking static target lib/librte_cmdline.a 00:01:44.944 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:44.944 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.944 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:44.944 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:44.944 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:44.944 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.944 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.944 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:45.203 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:45.203 [140/268] Linking target lib/librte_log.so.24.1 00:01:45.203 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.203 [142/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:45.203 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:45.203 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:45.203 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:45.203 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:45.203 [147/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:45.203 [148/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.203 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:45.203 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:45.203 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 
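The meson configuration summary at the top of this build (enable_drivers : bus,bus/pci,bus/vdev,mempool/ring, max_lcores : 128, tests : false, enable_docs : false, plus the long disable_libs list) trims DPDK down to just the pieces SPDK links against. As a rough sketch, a similar build could be reproduced by hand with something like the following; the option names come from the summary above, but the exact invocation SPDK's dpdkbuild wrapper uses is not visible in this log, and the disable_libs list is abbreviated here:

    meson setup build-tmp \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_libs=bbdev,gpudev,mldev,pipeline \
        -Dtests=false -Denable_docs=false -Dmax_lcores=128
    ninja -C build-tmp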
00:01:45.203 [152/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:45.203 [153/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:45.203 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:45.203 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:45.203 [156/268] Linking static target lib/librte_dmadev.a 00:01:45.203 [157/268] Linking static target lib/librte_mbuf.a 00:01:45.203 [158/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.203 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:45.203 [160/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:45.203 [161/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:45.203 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:45.203 [163/268] Linking static target lib/librte_compressdev.a 00:01:45.203 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:45.203 [165/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.203 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:45.203 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:45.203 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:45.203 [169/268] Linking target lib/librte_kvargs.so.24.1 00:01:45.203 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:45.203 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:45.203 [172/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:45.203 [173/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:45.203 [174/268] Linking target lib/librte_telemetry.so.24.1 00:01:45.203 [175/268] Linking static target lib/librte_timer.a 00:01:45.203 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:45.203 [177/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:45.203 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:45.203 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:45.203 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:45.203 [181/268] Linking static target lib/librte_reorder.a 00:01:45.203 [182/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:45.203 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:45.203 [184/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:45.203 [185/268] Linking static target lib/librte_power.a 00:01:45.203 [186/268] Linking static target lib/librte_security.a 00:01:45.203 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:45.203 [188/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:45.203 [189/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:45.462 [190/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:45.462 [191/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:45.462 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:45.462 [193/268] Generating symbol file 
lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:45.462 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:45.462 [195/268] Linking static target lib/librte_hash.a 00:01:45.462 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:45.462 [197/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:45.462 [198/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.462 [199/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.462 [200/268] Linking static target drivers/librte_bus_pci.a 00:01:45.462 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.462 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:45.462 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.462 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.462 [205/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:45.462 [206/268] Linking static target drivers/librte_mempool_ring.a 00:01:45.462 [207/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.462 [208/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.462 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.462 [210/268] Linking static target drivers/librte_bus_vdev.a 00:01:45.462 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.721 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.721 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.721 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.721 [215/268] Linking static target lib/librte_cryptodev.a 00:01:45.721 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.979 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.979 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.979 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.979 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.979 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:45.979 [222/268] Linking static target lib/librte_ethdev.a 00:01:45.979 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.238 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.238 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:46.238 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.238 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.171 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:47.171 [229/268] Linking static target lib/librte_vhost.a 00:01:47.429 
[230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.805 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.075 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.334 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.334 [234/268] Linking target lib/librte_eal.so.24.1 00:01:54.334 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:54.593 [236/268] Linking target lib/librte_timer.so.24.1 00:01:54.593 [237/268] Linking target lib/librte_ring.so.24.1 00:01:54.593 [238/268] Linking target lib/librte_meter.so.24.1 00:01:54.593 [239/268] Linking target lib/librte_pci.so.24.1 00:01:54.593 [240/268] Linking target lib/librte_dmadev.so.24.1 00:01:54.593 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:54.593 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:54.593 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:54.593 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:54.593 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:54.593 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:54.593 [247/268] Linking target lib/librte_mempool.so.24.1 00:01:54.593 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:54.593 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:54.853 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:54.853 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:54.853 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:54.853 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:54.853 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:55.112 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:55.112 [256/268] Linking target lib/librte_net.so.24.1 00:01:55.112 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:55.112 [258/268] Linking target lib/librte_compressdev.so.24.1 00:01:55.112 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:55.112 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:55.112 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:55.112 [262/268] Linking target lib/librte_hash.so.24.1 00:01:55.112 [263/268] Linking target lib/librte_security.so.24.1 00:01:55.112 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:55.371 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:55.371 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:55.371 [267/268] Linking target lib/librte_power.so.24.1 00:01:55.371 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:55.371 INFO: autodetecting backend as ninja 00:01:55.371 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:56.310 CC lib/ut_mock/mock.o 00:01:56.310 CC lib/log/log.o 00:01:56.310 CC lib/log/log_flags.o 00:01:56.310 CC lib/log/log_deprecated.o 00:01:56.310 CC lib/ut/ut.o 
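The "Generating symbol file ...so.24.1.symbols" steps in the link phase above are meson's relink optimization: the exported symbol list of each shared library is recorded so that dependent targets are relinked only when that list changes, not on every rebuild. The exported interface can also be inspected directly; a sketch, using the build-tmp path shown in this job's backend command:

    nm -D --defined-only \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp/lib/librte_eal.so.24.1 \
        | awk '$2 == "T" {print $3}' | head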
00:01:56.570 LIB libspdk_log.a 00:01:56.570 LIB libspdk_ut_mock.a 00:01:56.570 LIB libspdk_ut.a 00:01:56.570 SO libspdk_log.so.7.0 00:01:56.570 SO libspdk_ut_mock.so.6.0 00:01:56.570 SO libspdk_ut.so.2.0 00:01:56.570 SYMLINK libspdk_log.so 00:01:56.570 SYMLINK libspdk_ut_mock.so 00:01:56.570 SYMLINK libspdk_ut.so 00:01:56.828 CC lib/ioat/ioat.o 00:01:56.829 CC lib/util/base64.o 00:01:56.829 CC lib/util/bit_array.o 00:01:56.829 CC lib/util/cpuset.o 00:01:56.829 CC lib/util/crc16.o 00:01:56.829 CC lib/util/crc32c.o 00:01:56.829 CC lib/util/crc32.o 00:01:56.829 CC lib/util/crc32_ieee.o 00:01:56.829 CC lib/util/crc64.o 00:01:56.829 CC lib/util/dif.o 00:01:56.829 CC lib/util/fd.o 00:01:56.829 CC lib/util/fd_group.o 00:01:56.829 CC lib/util/file.o 00:01:56.829 CC lib/util/hexlify.o 00:01:56.829 CC lib/util/iov.o 00:01:56.829 CC lib/util/math.o 00:01:56.829 CC lib/util/net.o 00:01:56.829 CC lib/dma/dma.o 00:01:56.829 CC lib/util/pipe.o 00:01:56.829 CC lib/util/string.o 00:01:56.829 CC lib/util/strerror_tls.o 00:01:56.829 CC lib/util/uuid.o 00:01:56.829 CXX lib/trace_parser/trace.o 00:01:56.829 CC lib/util/xor.o 00:01:56.829 CC lib/util/zipf.o 00:01:57.088 CC lib/vfio_user/host/vfio_user_pci.o 00:01:57.088 CC lib/vfio_user/host/vfio_user.o 00:01:57.088 LIB libspdk_dma.a 00:01:57.088 LIB libspdk_ioat.a 00:01:57.088 SO libspdk_dma.so.4.0 00:01:57.088 SO libspdk_ioat.so.7.0 00:01:57.088 SYMLINK libspdk_dma.so 00:01:57.088 SYMLINK libspdk_ioat.so 00:01:57.347 LIB libspdk_vfio_user.a 00:01:57.347 LIB libspdk_util.a 00:01:57.347 SO libspdk_vfio_user.so.5.0 00:01:57.347 SO libspdk_util.so.9.1 00:01:57.347 SYMLINK libspdk_vfio_user.so 00:01:57.347 SYMLINK libspdk_util.so 00:01:57.607 LIB libspdk_trace_parser.a 00:01:57.607 SO libspdk_trace_parser.so.5.0 00:01:57.607 SYMLINK libspdk_trace_parser.so 00:01:57.607 CC lib/conf/conf.o 00:01:57.865 CC lib/vmd/vmd.o 00:01:57.865 CC lib/vmd/led.o 00:01:57.865 CC lib/env_dpdk/env.o 00:01:57.865 CC lib/rdma_provider/common.o 00:01:57.865 CC lib/env_dpdk/memory.o 00:01:57.865 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:57.865 CC lib/env_dpdk/pci.o 00:01:57.865 CC lib/json/json_parse.o 00:01:57.865 CC lib/idxd/idxd.o 00:01:57.865 CC lib/rdma_utils/rdma_utils.o 00:01:57.865 CC lib/env_dpdk/init.o 00:01:57.865 CC lib/json/json_util.o 00:01:57.865 CC lib/idxd/idxd_user.o 00:01:57.865 CC lib/json/json_write.o 00:01:57.865 CC lib/env_dpdk/threads.o 00:01:57.865 CC lib/idxd/idxd_kernel.o 00:01:57.865 CC lib/env_dpdk/pci_ioat.o 00:01:57.865 CC lib/env_dpdk/pci_virtio.o 00:01:57.865 CC lib/env_dpdk/pci_vmd.o 00:01:57.865 CC lib/env_dpdk/pci_idxd.o 00:01:57.865 CC lib/env_dpdk/pci_event.o 00:01:57.865 CC lib/env_dpdk/pci_dpdk.o 00:01:57.865 CC lib/env_dpdk/sigbus_handler.o 00:01:57.865 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:57.865 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:57.865 LIB libspdk_rdma_provider.a 00:01:57.865 LIB libspdk_conf.a 00:01:57.865 SO libspdk_rdma_provider.so.6.0 00:01:57.865 SO libspdk_conf.so.6.0 00:01:58.124 LIB libspdk_rdma_utils.a 00:01:58.124 LIB libspdk_json.a 00:01:58.124 SYMLINK libspdk_rdma_provider.so 00:01:58.124 SYMLINK libspdk_conf.so 00:01:58.124 SO libspdk_rdma_utils.so.1.0 00:01:58.124 SO libspdk_json.so.6.0 00:01:58.124 SYMLINK libspdk_rdma_utils.so 00:01:58.124 SYMLINK libspdk_json.so 00:01:58.124 LIB libspdk_idxd.a 00:01:58.124 LIB libspdk_vmd.a 00:01:58.124 SO libspdk_idxd.so.12.0 00:01:58.383 SO libspdk_vmd.so.6.0 00:01:58.383 SYMLINK libspdk_idxd.so 00:01:58.383 SYMLINK libspdk_vmd.so 00:01:58.383 CC lib/jsonrpc/jsonrpc_server.o 
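At this point the build has moved from DPDK into SPDK itself, and the LIB / SO / SYMLINK action tags above appear to be SPDK's per-library link steps: the static archive (LIB libspdk_log.a), the versioned shared object (SO libspdk_log.so.7.0), and an unversioned development symlink (SYMLINK libspdk_log.so). A sketch of what that last step amounts to, assuming the build/lib output directory:

    cd build/lib
    ln -sf libspdk_log.so.7.0 libspdk_log.so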
00:01:58.383 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:58.383 CC lib/jsonrpc/jsonrpc_client.o 00:01:58.383 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:58.643 LIB libspdk_jsonrpc.a 00:01:58.643 SO libspdk_jsonrpc.so.6.0 00:01:58.643 SYMLINK libspdk_jsonrpc.so 00:01:58.904 LIB libspdk_env_dpdk.a 00:01:58.904 SO libspdk_env_dpdk.so.15.0 00:01:58.904 SYMLINK libspdk_env_dpdk.so 00:01:58.904 CC lib/rpc/rpc.o 00:01:59.225 LIB libspdk_rpc.a 00:01:59.225 SO libspdk_rpc.so.6.0 00:01:59.225 SYMLINK libspdk_rpc.so 00:01:59.484 CC lib/keyring/keyring.o 00:01:59.484 CC lib/keyring/keyring_rpc.o 00:01:59.484 CC lib/trace/trace.o 00:01:59.484 CC lib/trace/trace_flags.o 00:01:59.484 CC lib/trace/trace_rpc.o 00:01:59.484 CC lib/notify/notify.o 00:01:59.484 CC lib/notify/notify_rpc.o 00:01:59.742 LIB libspdk_keyring.a 00:01:59.742 LIB libspdk_notify.a 00:01:59.742 LIB libspdk_trace.a 00:01:59.742 SO libspdk_keyring.so.1.0 00:01:59.742 SO libspdk_notify.so.6.0 00:01:59.742 SO libspdk_trace.so.10.0 00:01:59.742 SYMLINK libspdk_keyring.so 00:01:59.742 SYMLINK libspdk_notify.so 00:01:59.742 SYMLINK libspdk_trace.so 00:02:00.000 CC lib/sock/sock.o 00:02:00.000 CC lib/sock/sock_rpc.o 00:02:00.000 CC lib/thread/thread.o 00:02:00.000 CC lib/thread/iobuf.o 00:02:00.259 LIB libspdk_sock.a 00:02:00.519 SO libspdk_sock.so.10.0 00:02:00.519 SYMLINK libspdk_sock.so 00:02:00.777 CC lib/nvme/nvme_ctrlr.o 00:02:00.777 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:00.777 CC lib/nvme/nvme_fabric.o 00:02:00.777 CC lib/nvme/nvme_ns_cmd.o 00:02:00.777 CC lib/nvme/nvme_ns.o 00:02:00.777 CC lib/nvme/nvme_pcie_common.o 00:02:00.777 CC lib/nvme/nvme_pcie.o 00:02:00.777 CC lib/nvme/nvme_quirks.o 00:02:00.777 CC lib/nvme/nvme_qpair.o 00:02:00.777 CC lib/nvme/nvme.o 00:02:00.777 CC lib/nvme/nvme_transport.o 00:02:00.777 CC lib/nvme/nvme_discovery.o 00:02:00.777 CC lib/nvme/nvme_tcp.o 00:02:00.777 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:00.777 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:00.777 CC lib/nvme/nvme_opal.o 00:02:00.777 CC lib/nvme/nvme_io_msg.o 00:02:00.777 CC lib/nvme/nvme_poll_group.o 00:02:00.777 CC lib/nvme/nvme_zns.o 00:02:00.777 CC lib/nvme/nvme_stubs.o 00:02:00.777 CC lib/nvme/nvme_auth.o 00:02:00.777 CC lib/nvme/nvme_cuse.o 00:02:00.777 CC lib/nvme/nvme_vfio_user.o 00:02:00.777 CC lib/nvme/nvme_rdma.o 00:02:01.035 LIB libspdk_thread.a 00:02:01.294 SO libspdk_thread.so.10.1 00:02:01.294 SYMLINK libspdk_thread.so 00:02:01.553 CC lib/vfu_tgt/tgt_endpoint.o 00:02:01.553 CC lib/vfu_tgt/tgt_rpc.o 00:02:01.553 CC lib/accel/accel.o 00:02:01.553 CC lib/accel/accel_sw.o 00:02:01.553 CC lib/accel/accel_rpc.o 00:02:01.553 CC lib/virtio/virtio.o 00:02:01.553 CC lib/virtio/virtio_vhost_user.o 00:02:01.553 CC lib/init/json_config.o 00:02:01.553 CC lib/virtio/virtio_vfio_user.o 00:02:01.553 CC lib/init/subsystem.o 00:02:01.553 CC lib/virtio/virtio_pci.o 00:02:01.553 CC lib/init/subsystem_rpc.o 00:02:01.553 CC lib/init/rpc.o 00:02:01.553 CC lib/blob/zeroes.o 00:02:01.553 CC lib/blob/blobstore.o 00:02:01.553 CC lib/blob/request.o 00:02:01.553 CC lib/blob/blob_bs_dev.o 00:02:01.812 LIB libspdk_init.a 00:02:01.812 SO libspdk_init.so.5.0 00:02:01.812 LIB libspdk_vfu_tgt.a 00:02:01.812 LIB libspdk_virtio.a 00:02:01.812 SO libspdk_vfu_tgt.so.3.0 00:02:01.812 SYMLINK libspdk_init.so 00:02:01.812 SO libspdk_virtio.so.7.0 00:02:01.812 SYMLINK libspdk_vfu_tgt.so 00:02:01.812 SYMLINK libspdk_virtio.so 00:02:02.070 CC lib/event/app.o 00:02:02.070 CC lib/event/reactor.o 00:02:02.070 CC lib/event/app_rpc.o 00:02:02.070 CC lib/event/log_rpc.o 00:02:02.070 CC 
lib/event/scheduler_static.o 00:02:02.330 LIB libspdk_accel.a 00:02:02.330 SO libspdk_accel.so.15.1 00:02:02.330 LIB libspdk_nvme.a 00:02:02.330 SYMLINK libspdk_accel.so 00:02:02.330 LIB libspdk_event.a 00:02:02.330 SO libspdk_nvme.so.13.1 00:02:02.330 SO libspdk_event.so.14.0 00:02:02.589 SYMLINK libspdk_event.so 00:02:02.589 CC lib/bdev/bdev.o 00:02:02.589 CC lib/bdev/bdev_rpc.o 00:02:02.589 CC lib/bdev/scsi_nvme.o 00:02:02.589 CC lib/bdev/bdev_zone.o 00:02:02.589 CC lib/bdev/part.o 00:02:02.589 SYMLINK libspdk_nvme.so 00:02:03.527 LIB libspdk_blob.a 00:02:03.527 SO libspdk_blob.so.11.0 00:02:03.787 SYMLINK libspdk_blob.so 00:02:04.046 CC lib/lvol/lvol.o 00:02:04.046 CC lib/blobfs/blobfs.o 00:02:04.046 CC lib/blobfs/tree.o 00:02:04.305 LIB libspdk_bdev.a 00:02:04.305 SO libspdk_bdev.so.15.1 00:02:04.564 SYMLINK libspdk_bdev.so 00:02:04.564 LIB libspdk_blobfs.a 00:02:04.564 LIB libspdk_lvol.a 00:02:04.564 SO libspdk_blobfs.so.10.0 00:02:04.564 SO libspdk_lvol.so.10.0 00:02:04.564 SYMLINK libspdk_blobfs.so 00:02:04.564 SYMLINK libspdk_lvol.so 00:02:04.824 CC lib/ftl/ftl_core.o 00:02:04.824 CC lib/ftl/ftl_init.o 00:02:04.824 CC lib/ftl/ftl_layout.o 00:02:04.824 CC lib/ftl/ftl_debug.o 00:02:04.824 CC lib/nbd/nbd.o 00:02:04.824 CC lib/ublk/ublk.o 00:02:04.824 CC lib/ftl/ftl_io.o 00:02:04.824 CC lib/ublk/ublk_rpc.o 00:02:04.824 CC lib/nbd/nbd_rpc.o 00:02:04.824 CC lib/ftl/ftl_sb.o 00:02:04.824 CC lib/ftl/ftl_l2p.o 00:02:04.824 CC lib/ftl/ftl_l2p_flat.o 00:02:04.824 CC lib/ftl/ftl_nv_cache.o 00:02:04.824 CC lib/nvmf/ctrlr.o 00:02:04.824 CC lib/ftl/ftl_band.o 00:02:04.824 CC lib/nvmf/ctrlr_discovery.o 00:02:04.824 CC lib/ftl/ftl_band_ops.o 00:02:04.824 CC lib/nvmf/nvmf.o 00:02:04.824 CC lib/ftl/ftl_writer.o 00:02:04.824 CC lib/nvmf/ctrlr_bdev.o 00:02:04.824 CC lib/ftl/ftl_reloc.o 00:02:04.824 CC lib/nvmf/nvmf_rpc.o 00:02:04.824 CC lib/ftl/ftl_rq.o 00:02:04.824 CC lib/nvmf/subsystem.o 00:02:04.824 CC lib/ftl/ftl_l2p_cache.o 00:02:04.824 CC lib/nvmf/transport.o 00:02:04.824 CC lib/ftl/ftl_p2l.o 00:02:04.824 CC lib/nvmf/tcp.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt.o 00:02:04.824 CC lib/nvmf/stubs.o 00:02:04.824 CC lib/scsi/dev.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:04.824 CC lib/scsi/lun.o 00:02:04.824 CC lib/nvmf/mdns_server.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:04.824 CC lib/nvmf/vfio_user.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:04.824 CC lib/scsi/scsi.o 00:02:04.824 CC lib/nvmf/rdma.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:04.824 CC lib/scsi/port.o 00:02:04.824 CC lib/scsi/scsi_bdev.o 00:02:04.824 CC lib/nvmf/auth.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:04.824 CC lib/scsi/scsi_pr.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:04.824 CC lib/scsi/scsi_rpc.o 00:02:04.824 CC lib/scsi/task.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.824 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.825 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:04.825 CC lib/ftl/utils/ftl_md.o 00:02:04.825 CC lib/ftl/utils/ftl_conf.o 00:02:04.825 CC lib/ftl/utils/ftl_bitmap.o 00:02:04.825 CC lib/ftl/utils/ftl_mempool.o 00:02:04.825 CC lib/ftl/utils/ftl_property.o 00:02:04.825 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:04.825 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:04.825 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:04.825 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:04.825 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:04.825 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:04.825 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:04.825 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:04.825 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:04.825 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:04.825 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:04.825 CC lib/ftl/base/ftl_base_dev.o 00:02:04.825 CC lib/ftl/base/ftl_base_bdev.o 00:02:04.825 CC lib/ftl/ftl_trace.o 00:02:05.391 LIB libspdk_nbd.a 00:02:05.391 SO libspdk_nbd.so.7.0 00:02:05.391 SYMLINK libspdk_nbd.so 00:02:05.391 LIB libspdk_scsi.a 00:02:05.391 SO libspdk_scsi.so.9.0 00:02:05.391 SYMLINK libspdk_scsi.so 00:02:05.649 LIB libspdk_ublk.a 00:02:05.649 SO libspdk_ublk.so.3.0 00:02:05.649 SYMLINK libspdk_ublk.so 00:02:05.649 LIB libspdk_ftl.a 00:02:05.649 CC lib/vhost/vhost_rpc.o 00:02:05.649 CC lib/vhost/vhost.o 00:02:05.649 CC lib/vhost/vhost_scsi.o 00:02:05.649 CC lib/vhost/vhost_blk.o 00:02:05.649 CC lib/vhost/rte_vhost_user.o 00:02:05.649 CC lib/iscsi/init_grp.o 00:02:05.649 CC lib/iscsi/conn.o 00:02:05.649 CC lib/iscsi/md5.o 00:02:05.649 CC lib/iscsi/iscsi.o 00:02:05.649 CC lib/iscsi/portal_grp.o 00:02:05.649 CC lib/iscsi/param.o 00:02:05.649 CC lib/iscsi/iscsi_subsystem.o 00:02:05.649 CC lib/iscsi/tgt_node.o 00:02:05.649 CC lib/iscsi/iscsi_rpc.o 00:02:05.649 CC lib/iscsi/task.o 00:02:05.907 SO libspdk_ftl.so.9.0 00:02:06.165 SYMLINK libspdk_ftl.so 00:02:06.422 LIB libspdk_nvmf.a 00:02:06.422 SO libspdk_nvmf.so.19.0 00:02:06.681 LIB libspdk_vhost.a 00:02:06.681 SO libspdk_vhost.so.8.0 00:02:06.681 SYMLINK libspdk_nvmf.so 00:02:06.681 SYMLINK libspdk_vhost.so 00:02:06.681 LIB libspdk_iscsi.a 00:02:06.939 SO libspdk_iscsi.so.8.0 00:02:06.939 SYMLINK libspdk_iscsi.so 00:02:07.507 CC module/env_dpdk/env_dpdk_rpc.o 00:02:07.507 CC module/vfu_device/vfu_virtio_scsi.o 00:02:07.507 CC module/vfu_device/vfu_virtio.o 00:02:07.507 CC module/vfu_device/vfu_virtio_blk.o 00:02:07.507 CC module/vfu_device/vfu_virtio_rpc.o 00:02:07.507 CC module/keyring/linux/keyring.o 00:02:07.507 CC module/keyring/linux/keyring_rpc.o 00:02:07.507 LIB libspdk_env_dpdk_rpc.a 00:02:07.507 CC module/keyring/file/keyring.o 00:02:07.507 CC module/keyring/file/keyring_rpc.o 00:02:07.507 CC module/scheduler/gscheduler/gscheduler.o 00:02:07.507 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:07.507 CC module/accel/dsa/accel_dsa_rpc.o 00:02:07.507 CC module/accel/dsa/accel_dsa.o 00:02:07.507 CC module/accel/error/accel_error.o 00:02:07.507 CC module/accel/error/accel_error_rpc.o 00:02:07.507 CC module/accel/ioat/accel_ioat_rpc.o 00:02:07.507 CC module/accel/ioat/accel_ioat.o 00:02:07.507 CC module/blob/bdev/blob_bdev.o 00:02:07.507 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:07.507 CC module/accel/iaa/accel_iaa.o 00:02:07.507 CC module/accel/iaa/accel_iaa_rpc.o 00:02:07.507 SO libspdk_env_dpdk_rpc.so.6.0 00:02:07.507 CC module/sock/posix/posix.o 00:02:07.507 SYMLINK libspdk_env_dpdk_rpc.so 00:02:07.766 LIB libspdk_keyring_linux.a 00:02:07.766 LIB libspdk_keyring_file.a 00:02:07.766 SO libspdk_keyring_linux.so.1.0 00:02:07.766 LIB libspdk_scheduler_dpdk_governor.a 00:02:07.766 LIB libspdk_scheduler_gscheduler.a 00:02:07.766 SO libspdk_keyring_file.so.1.0 00:02:07.766 LIB libspdk_accel_error.a 00:02:07.766 LIB libspdk_accel_ioat.a 00:02:07.766 SO libspdk_scheduler_gscheduler.so.4.0 00:02:07.766 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:07.766 LIB libspdk_scheduler_dynamic.a 00:02:07.766 SYMLINK libspdk_keyring_linux.so 00:02:07.766 LIB libspdk_accel_iaa.a 00:02:07.766 SO 
libspdk_accel_error.so.2.0 00:02:07.766 SO libspdk_accel_ioat.so.6.0 00:02:07.766 SO libspdk_scheduler_dynamic.so.4.0 00:02:07.766 SYMLINK libspdk_keyring_file.so 00:02:07.766 LIB libspdk_accel_dsa.a 00:02:07.766 SYMLINK libspdk_scheduler_gscheduler.so 00:02:07.766 LIB libspdk_blob_bdev.a 00:02:07.766 SO libspdk_accel_iaa.so.3.0 00:02:07.766 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:07.766 SYMLINK libspdk_accel_error.so 00:02:07.766 SO libspdk_accel_dsa.so.5.0 00:02:07.766 SYMLINK libspdk_scheduler_dynamic.so 00:02:07.766 SO libspdk_blob_bdev.so.11.0 00:02:07.766 SYMLINK libspdk_accel_ioat.so 00:02:07.766 SYMLINK libspdk_accel_iaa.so 00:02:07.766 LIB libspdk_vfu_device.a 00:02:07.766 SYMLINK libspdk_accel_dsa.so 00:02:07.766 SYMLINK libspdk_blob_bdev.so 00:02:08.025 SO libspdk_vfu_device.so.3.0 00:02:08.025 SYMLINK libspdk_vfu_device.so 00:02:08.025 LIB libspdk_sock_posix.a 00:02:08.284 SO libspdk_sock_posix.so.6.0 00:02:08.284 SYMLINK libspdk_sock_posix.so 00:02:08.284 CC module/bdev/error/vbdev_error.o 00:02:08.284 CC module/bdev/error/vbdev_error_rpc.o 00:02:08.284 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.284 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.284 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:08.284 CC module/bdev/lvol/vbdev_lvol.o 00:02:08.284 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.284 CC module/bdev/split/vbdev_split.o 00:02:08.284 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.284 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:08.284 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.284 CC module/bdev/malloc/bdev_malloc.o 00:02:08.284 CC module/bdev/split/vbdev_split_rpc.o 00:02:08.284 CC module/bdev/delay/vbdev_delay.o 00:02:08.284 CC module/bdev/null/bdev_null.o 00:02:08.284 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:08.284 CC module/bdev/null/bdev_null_rpc.o 00:02:08.284 CC module/bdev/raid/bdev_raid.o 00:02:08.284 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.284 CC module/bdev/raid/bdev_raid_rpc.o 00:02:08.284 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.284 CC module/bdev/nvme/bdev_nvme.o 00:02:08.284 CC module/bdev/raid/bdev_raid_sb.o 00:02:08.284 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:08.284 CC module/bdev/nvme/nvme_rpc.o 00:02:08.284 CC module/bdev/gpt/gpt.o 00:02:08.284 CC module/bdev/raid/raid1.o 00:02:08.284 CC module/bdev/raid/concat.o 00:02:08.284 CC module/bdev/gpt/vbdev_gpt.o 00:02:08.284 CC module/bdev/nvme/bdev_mdns_client.o 00:02:08.284 CC module/bdev/raid/raid0.o 00:02:08.284 CC module/bdev/nvme/vbdev_opal.o 00:02:08.284 CC module/bdev/passthru/vbdev_passthru.o 00:02:08.284 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:08.284 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:08.284 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:08.284 CC module/bdev/ftl/bdev_ftl.o 00:02:08.284 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.284 CC module/bdev/aio/bdev_aio.o 00:02:08.284 CC module/bdev/aio/bdev_aio_rpc.o 00:02:08.284 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.284 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:08.543 LIB libspdk_blobfs_bdev.a 00:02:08.543 LIB libspdk_bdev_error.a 00:02:08.543 SO libspdk_blobfs_bdev.so.6.0 00:02:08.543 LIB libspdk_bdev_null.a 00:02:08.543 LIB libspdk_bdev_split.a 00:02:08.543 SO libspdk_bdev_error.so.6.0 00:02:08.543 SO libspdk_bdev_null.so.6.0 00:02:08.543 LIB libspdk_bdev_gpt.a 00:02:08.543 SO libspdk_bdev_split.so.6.0 00:02:08.543 LIB libspdk_bdev_zone_block.a 00:02:08.543 LIB libspdk_bdev_ftl.a 00:02:08.543 LIB libspdk_bdev_passthru.a 00:02:08.801 SYMLINK libspdk_blobfs_bdev.so 
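Many of the module/ objects in this stretch (the dynamic, gscheduler and dpdk_governor schedulers, the keyring, accel and sock backends) are interchangeable plugins: all of them are linked in, and an application picks between them at runtime over JSON-RPC rather than at link time. As an illustrative example, assuming a running SPDK target with the default RPC socket, the dynamic scheduler linked above could be selected with:

    ./scripts/rpc.py framework_set_scheduler dynamic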
00:02:08.801 LIB libspdk_bdev_iscsi.a 00:02:08.801 SO libspdk_bdev_gpt.so.6.0 00:02:08.801 LIB libspdk_bdev_aio.a 00:02:08.801 SYMLINK libspdk_bdev_error.so 00:02:08.801 SO libspdk_bdev_ftl.so.6.0 00:02:08.801 SYMLINK libspdk_bdev_split.so 00:02:08.801 SYMLINK libspdk_bdev_null.so 00:02:08.801 SO libspdk_bdev_zone_block.so.6.0 00:02:08.801 SO libspdk_bdev_iscsi.so.6.0 00:02:08.801 SO libspdk_bdev_passthru.so.6.0 00:02:08.801 LIB libspdk_bdev_delay.a 00:02:08.801 SO libspdk_bdev_aio.so.6.0 00:02:08.801 LIB libspdk_bdev_malloc.a 00:02:08.801 SYMLINK libspdk_bdev_gpt.so 00:02:08.801 SO libspdk_bdev_delay.so.6.0 00:02:08.801 SYMLINK libspdk_bdev_ftl.so 00:02:08.801 SYMLINK libspdk_bdev_iscsi.so 00:02:08.801 SYMLINK libspdk_bdev_zone_block.so 00:02:08.801 SO libspdk_bdev_malloc.so.6.0 00:02:08.801 SYMLINK libspdk_bdev_passthru.so 00:02:08.801 SYMLINK libspdk_bdev_aio.so 00:02:08.801 LIB libspdk_bdev_lvol.a 00:02:08.801 LIB libspdk_bdev_virtio.a 00:02:08.801 SYMLINK libspdk_bdev_delay.so 00:02:08.801 SO libspdk_bdev_lvol.so.6.0 00:02:08.801 SO libspdk_bdev_virtio.so.6.0 00:02:08.801 SYMLINK libspdk_bdev_malloc.so 00:02:08.801 SYMLINK libspdk_bdev_lvol.so 00:02:08.801 SYMLINK libspdk_bdev_virtio.so 00:02:09.060 LIB libspdk_bdev_raid.a 00:02:09.060 SO libspdk_bdev_raid.so.6.0 00:02:09.319 SYMLINK libspdk_bdev_raid.so 00:02:09.886 LIB libspdk_bdev_nvme.a 00:02:10.144 SO libspdk_bdev_nvme.so.7.0 00:02:10.144 SYMLINK libspdk_bdev_nvme.so 00:02:10.711 CC module/event/subsystems/keyring/keyring.o 00:02:10.711 CC module/event/subsystems/sock/sock.o 00:02:10.711 CC module/event/subsystems/scheduler/scheduler.o 00:02:10.711 CC module/event/subsystems/iobuf/iobuf.o 00:02:10.711 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:10.711 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:10.711 CC module/event/subsystems/vmd/vmd.o 00:02:10.711 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:10.711 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:10.711 LIB libspdk_event_keyring.a 00:02:10.711 LIB libspdk_event_scheduler.a 00:02:10.711 LIB libspdk_event_vhost_blk.a 00:02:10.711 LIB libspdk_event_sock.a 00:02:10.711 SO libspdk_event_keyring.so.1.0 00:02:10.711 SO libspdk_event_scheduler.so.4.0 00:02:10.711 LIB libspdk_event_iobuf.a 00:02:10.970 LIB libspdk_event_vmd.a 00:02:10.970 SO libspdk_event_sock.so.5.0 00:02:10.970 LIB libspdk_event_vfu_tgt.a 00:02:10.970 SO libspdk_event_vhost_blk.so.3.0 00:02:10.970 SO libspdk_event_iobuf.so.3.0 00:02:10.970 SO libspdk_event_vmd.so.6.0 00:02:10.970 SYMLINK libspdk_event_scheduler.so 00:02:10.970 SO libspdk_event_vfu_tgt.so.3.0 00:02:10.970 SYMLINK libspdk_event_keyring.so 00:02:10.970 SYMLINK libspdk_event_sock.so 00:02:10.970 SYMLINK libspdk_event_iobuf.so 00:02:10.970 SYMLINK libspdk_event_vhost_blk.so 00:02:10.970 SYMLINK libspdk_event_vmd.so 00:02:10.970 SYMLINK libspdk_event_vfu_tgt.so 00:02:11.229 CC module/event/subsystems/accel/accel.o 00:02:11.229 LIB libspdk_event_accel.a 00:02:11.229 SO libspdk_event_accel.so.6.0 00:02:11.488 SYMLINK libspdk_event_accel.so 00:02:11.747 CC module/event/subsystems/bdev/bdev.o 00:02:11.747 LIB libspdk_event_bdev.a 00:02:11.747 SO libspdk_event_bdev.so.6.0 00:02:12.006 SYMLINK libspdk_event_bdev.so 00:02:12.263 CC module/event/subsystems/nbd/nbd.o 00:02:12.263 CC module/event/subsystems/scsi/scsi.o 00:02:12.263 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:12.263 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:12.263 CC module/event/subsystems/ublk/ublk.o 00:02:12.263 LIB libspdk_event_nbd.a 00:02:12.263 SO 
libspdk_event_nbd.so.6.0 00:02:12.263 LIB libspdk_event_ublk.a 00:02:12.263 LIB libspdk_event_scsi.a 00:02:12.263 SYMLINK libspdk_event_nbd.so 00:02:12.263 SO libspdk_event_ublk.so.3.0 00:02:12.263 SO libspdk_event_scsi.so.6.0 00:02:12.263 LIB libspdk_event_nvmf.a 00:02:12.521 SYMLINK libspdk_event_ublk.so 00:02:12.521 SO libspdk_event_nvmf.so.6.0 00:02:12.521 SYMLINK libspdk_event_scsi.so 00:02:12.521 SYMLINK libspdk_event_nvmf.so 00:02:12.780 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.780 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.780 LIB libspdk_event_iscsi.a 00:02:12.780 LIB libspdk_event_vhost_scsi.a 00:02:12.780 SO libspdk_event_iscsi.so.6.0 00:02:12.780 SO libspdk_event_vhost_scsi.so.3.0 00:02:13.039 SYMLINK libspdk_event_vhost_scsi.so 00:02:13.039 SYMLINK libspdk_event_iscsi.so 00:02:13.039 SO libspdk.so.6.0 00:02:13.039 SYMLINK libspdk.so 00:02:13.640 TEST_HEADER include/spdk/accel.h 00:02:13.640 TEST_HEADER include/spdk/assert.h 00:02:13.640 TEST_HEADER include/spdk/barrier.h 00:02:13.640 TEST_HEADER include/spdk/accel_module.h 00:02:13.640 TEST_HEADER include/spdk/base64.h 00:02:13.640 TEST_HEADER include/spdk/bdev_module.h 00:02:13.640 CC app/spdk_nvme_perf/perf.o 00:02:13.640 TEST_HEADER include/spdk/bdev.h 00:02:13.640 TEST_HEADER include/spdk/bdev_zone.h 00:02:13.640 TEST_HEADER include/spdk/bit_array.h 00:02:13.640 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:13.640 TEST_HEADER include/spdk/blob_bdev.h 00:02:13.640 TEST_HEADER include/spdk/bit_pool.h 00:02:13.640 TEST_HEADER include/spdk/blobfs.h 00:02:13.640 TEST_HEADER include/spdk/blob.h 00:02:13.640 TEST_HEADER include/spdk/conf.h 00:02:13.640 CC test/rpc_client/rpc_client_test.o 00:02:13.640 TEST_HEADER include/spdk/config.h 00:02:13.640 CXX app/trace/trace.o 00:02:13.640 CC app/spdk_nvme_identify/identify.o 00:02:13.640 CC app/spdk_top/spdk_top.o 00:02:13.640 TEST_HEADER include/spdk/crc32.h 00:02:13.640 TEST_HEADER include/spdk/cpuset.h 00:02:13.640 CC app/spdk_lspci/spdk_lspci.o 00:02:13.640 TEST_HEADER include/spdk/crc16.h 00:02:13.640 CC app/trace_record/trace_record.o 00:02:13.640 TEST_HEADER include/spdk/crc64.h 00:02:13.640 TEST_HEADER include/spdk/dif.h 00:02:13.640 TEST_HEADER include/spdk/dma.h 00:02:13.640 TEST_HEADER include/spdk/endian.h 00:02:13.640 TEST_HEADER include/spdk/env_dpdk.h 00:02:13.640 TEST_HEADER include/spdk/env.h 00:02:13.640 TEST_HEADER include/spdk/event.h 00:02:13.640 TEST_HEADER include/spdk/fd_group.h 00:02:13.640 TEST_HEADER include/spdk/file.h 00:02:13.640 TEST_HEADER include/spdk/fd.h 00:02:13.640 TEST_HEADER include/spdk/ftl.h 00:02:13.640 TEST_HEADER include/spdk/gpt_spec.h 00:02:13.640 TEST_HEADER include/spdk/idxd.h 00:02:13.640 TEST_HEADER include/spdk/hexlify.h 00:02:13.640 TEST_HEADER include/spdk/histogram_data.h 00:02:13.640 TEST_HEADER include/spdk/idxd_spec.h 00:02:13.640 CC app/spdk_nvme_discover/discovery_aer.o 00:02:13.640 TEST_HEADER include/spdk/init.h 00:02:13.640 TEST_HEADER include/spdk/ioat_spec.h 00:02:13.640 TEST_HEADER include/spdk/ioat.h 00:02:13.640 TEST_HEADER include/spdk/json.h 00:02:13.640 TEST_HEADER include/spdk/iscsi_spec.h 00:02:13.640 TEST_HEADER include/spdk/keyring.h 00:02:13.640 TEST_HEADER include/spdk/jsonrpc.h 00:02:13.640 TEST_HEADER include/spdk/keyring_module.h 00:02:13.640 TEST_HEADER include/spdk/likely.h 00:02:13.640 TEST_HEADER include/spdk/log.h 00:02:13.640 TEST_HEADER include/spdk/memory.h 00:02:13.640 TEST_HEADER include/spdk/lvol.h 00:02:13.640 TEST_HEADER include/spdk/mmio.h 00:02:13.640 TEST_HEADER 
include/spdk/nbd.h 00:02:13.640 TEST_HEADER include/spdk/net.h 00:02:13.640 TEST_HEADER include/spdk/notify.h 00:02:13.640 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:13.640 TEST_HEADER include/spdk/nvme.h 00:02:13.640 TEST_HEADER include/spdk/nvme_intel.h 00:02:13.640 TEST_HEADER include/spdk/nvme_spec.h 00:02:13.640 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:13.640 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:13.640 TEST_HEADER include/spdk/nvme_zns.h 00:02:13.640 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:13.640 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:13.640 TEST_HEADER include/spdk/nvmf.h 00:02:13.640 TEST_HEADER include/spdk/nvmf_spec.h 00:02:13.640 TEST_HEADER include/spdk/nvmf_transport.h 00:02:13.640 CC app/nvmf_tgt/nvmf_main.o 00:02:13.640 TEST_HEADER include/spdk/pipe.h 00:02:13.640 TEST_HEADER include/spdk/opal.h 00:02:13.640 TEST_HEADER include/spdk/opal_spec.h 00:02:13.640 TEST_HEADER include/spdk/queue.h 00:02:13.640 TEST_HEADER include/spdk/pci_ids.h 00:02:13.640 TEST_HEADER include/spdk/reduce.h 00:02:13.640 TEST_HEADER include/spdk/rpc.h 00:02:13.640 TEST_HEADER include/spdk/scheduler.h 00:02:13.640 TEST_HEADER include/spdk/scsi.h 00:02:13.640 TEST_HEADER include/spdk/scsi_spec.h 00:02:13.640 TEST_HEADER include/spdk/string.h 00:02:13.640 TEST_HEADER include/spdk/thread.h 00:02:13.640 TEST_HEADER include/spdk/stdinc.h 00:02:13.640 TEST_HEADER include/spdk/sock.h 00:02:13.640 TEST_HEADER include/spdk/trace_parser.h 00:02:13.640 TEST_HEADER include/spdk/trace.h 00:02:13.640 TEST_HEADER include/spdk/tree.h 00:02:13.640 TEST_HEADER include/spdk/ublk.h 00:02:13.640 TEST_HEADER include/spdk/util.h 00:02:13.640 TEST_HEADER include/spdk/version.h 00:02:13.641 TEST_HEADER include/spdk/uuid.h 00:02:13.641 CC app/spdk_dd/spdk_dd.o 00:02:13.641 CC app/iscsi_tgt/iscsi_tgt.o 00:02:13.641 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:13.641 TEST_HEADER include/spdk/vhost.h 00:02:13.641 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:13.641 TEST_HEADER include/spdk/vmd.h 00:02:13.641 TEST_HEADER include/spdk/xor.h 00:02:13.641 TEST_HEADER include/spdk/zipf.h 00:02:13.641 CXX test/cpp_headers/accel_module.o 00:02:13.641 CXX test/cpp_headers/barrier.o 00:02:13.641 CXX test/cpp_headers/assert.o 00:02:13.641 CXX test/cpp_headers/accel.o 00:02:13.641 CXX test/cpp_headers/base64.o 00:02:13.641 CXX test/cpp_headers/bdev.o 00:02:13.641 CXX test/cpp_headers/bdev_module.o 00:02:13.641 CXX test/cpp_headers/bdev_zone.o 00:02:13.641 CXX test/cpp_headers/bit_array.o 00:02:13.641 CXX test/cpp_headers/bit_pool.o 00:02:13.641 CXX test/cpp_headers/blob_bdev.o 00:02:13.641 CXX test/cpp_headers/blobfs.o 00:02:13.641 CXX test/cpp_headers/blobfs_bdev.o 00:02:13.641 CXX test/cpp_headers/blob.o 00:02:13.641 CXX test/cpp_headers/conf.o 00:02:13.641 CXX test/cpp_headers/cpuset.o 00:02:13.641 CXX test/cpp_headers/config.o 00:02:13.641 CXX test/cpp_headers/crc16.o 00:02:13.641 CXX test/cpp_headers/crc32.o 00:02:13.641 CXX test/cpp_headers/dif.o 00:02:13.641 CXX test/cpp_headers/crc64.o 00:02:13.641 CXX test/cpp_headers/dma.o 00:02:13.641 CXX test/cpp_headers/endian.o 00:02:13.641 CXX test/cpp_headers/env_dpdk.o 00:02:13.641 CC app/spdk_tgt/spdk_tgt.o 00:02:13.641 CXX test/cpp_headers/env.o 00:02:13.641 CXX test/cpp_headers/fd_group.o 00:02:13.641 CXX test/cpp_headers/event.o 00:02:13.641 CXX test/cpp_headers/fd.o 00:02:13.641 CXX test/cpp_headers/file.o 00:02:13.641 CXX test/cpp_headers/gpt_spec.o 00:02:13.641 CXX test/cpp_headers/ftl.o 00:02:13.641 CXX test/cpp_headers/histogram_data.o 00:02:13.641 
CXX test/cpp_headers/hexlify.o 00:02:13.641 CXX test/cpp_headers/idxd_spec.o 00:02:13.641 CXX test/cpp_headers/idxd.o 00:02:13.641 CXX test/cpp_headers/init.o 00:02:13.641 CXX test/cpp_headers/ioat.o 00:02:13.641 CXX test/cpp_headers/iscsi_spec.o 00:02:13.641 CXX test/cpp_headers/ioat_spec.o 00:02:13.641 CXX test/cpp_headers/json.o 00:02:13.641 CXX test/cpp_headers/jsonrpc.o 00:02:13.641 CXX test/cpp_headers/keyring_module.o 00:02:13.641 CXX test/cpp_headers/keyring.o 00:02:13.641 CXX test/cpp_headers/likely.o 00:02:13.641 CXX test/cpp_headers/log.o 00:02:13.641 CXX test/cpp_headers/lvol.o 00:02:13.641 CXX test/cpp_headers/memory.o 00:02:13.641 CXX test/cpp_headers/mmio.o 00:02:13.641 CXX test/cpp_headers/net.o 00:02:13.641 CXX test/cpp_headers/nbd.o 00:02:13.641 CXX test/cpp_headers/nvme.o 00:02:13.641 CXX test/cpp_headers/notify.o 00:02:13.641 CXX test/cpp_headers/nvme_intel.o 00:02:13.641 CXX test/cpp_headers/nvme_ocssd.o 00:02:13.641 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:13.641 CXX test/cpp_headers/nvme_spec.o 00:02:13.641 CXX test/cpp_headers/nvme_zns.o 00:02:13.641 CXX test/cpp_headers/nvmf_cmd.o 00:02:13.641 CXX test/cpp_headers/nvmf.o 00:02:13.641 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:13.641 CXX test/cpp_headers/nvmf_spec.o 00:02:13.641 CXX test/cpp_headers/nvmf_transport.o 00:02:13.641 CXX test/cpp_headers/opal.o 00:02:13.641 CXX test/cpp_headers/opal_spec.o 00:02:13.641 CXX test/cpp_headers/pci_ids.o 00:02:13.641 CXX test/cpp_headers/pipe.o 00:02:13.641 CXX test/cpp_headers/queue.o 00:02:13.641 CC test/app/histogram_perf/histogram_perf.o 00:02:13.641 CC test/app/stub/stub.o 00:02:13.641 CC examples/ioat/verify/verify.o 00:02:13.641 CC examples/util/zipf/zipf.o 00:02:13.641 CC test/app/jsoncat/jsoncat.o 00:02:13.641 CC test/thread/poller_perf/poller_perf.o 00:02:13.641 CC test/env/vtophys/vtophys.o 00:02:13.641 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:13.641 CC examples/ioat/perf/perf.o 00:02:13.641 CC test/env/pci/pci_ut.o 00:02:13.641 CC test/env/memory/memory_ut.o 00:02:13.641 CC test/dma/test_dma/test_dma.o 00:02:13.641 CC app/fio/nvme/fio_plugin.o 00:02:13.641 CC test/app/bdev_svc/bdev_svc.o 00:02:13.907 CC app/fio/bdev/fio_plugin.o 00:02:13.907 LINK rpc_client_test 00:02:13.907 LINK spdk_lspci 00:02:13.907 LINK interrupt_tgt 00:02:13.907 LINK nvmf_tgt 00:02:13.907 LINK spdk_nvme_discover 00:02:14.168 CC test/env/mem_callbacks/mem_callbacks.o 00:02:14.168 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:14.168 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:14.168 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:14.168 LINK histogram_perf 00:02:14.168 LINK iscsi_tgt 00:02:14.168 LINK poller_perf 00:02:14.168 CXX test/cpp_headers/reduce.o 00:02:14.168 CXX test/cpp_headers/rpc.o 00:02:14.168 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:14.168 LINK zipf 00:02:14.168 CXX test/cpp_headers/scheduler.o 00:02:14.168 CXX test/cpp_headers/scsi.o 00:02:14.168 CXX test/cpp_headers/scsi_spec.o 00:02:14.168 CXX test/cpp_headers/sock.o 00:02:14.168 LINK vtophys 00:02:14.169 CXX test/cpp_headers/stdinc.o 00:02:14.169 LINK jsoncat 00:02:14.169 CXX test/cpp_headers/string.o 00:02:14.169 CXX test/cpp_headers/thread.o 00:02:14.169 CXX test/cpp_headers/trace.o 00:02:14.169 LINK stub 00:02:14.169 CXX test/cpp_headers/trace_parser.o 00:02:14.169 CXX test/cpp_headers/tree.o 00:02:14.169 LINK spdk_trace_record 00:02:14.169 CXX test/cpp_headers/ublk.o 00:02:14.169 LINK env_dpdk_post_init 00:02:14.169 CXX test/cpp_headers/util.o 00:02:14.169 CXX test/cpp_headers/uuid.o 
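The long run of CXX test/cpp_headers/*.o compiles above is a header hygiene check: each public include/spdk/*.h gets its own one-line translation unit, so every header must compile stand-alone with nothing included before it. A minimal sketch of the same idea done by hand (the temp file name is illustrative, and a few headers may legitimately need C rather than C++):

    for h in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/hdr_check.cpp
        g++ -Iinclude -c /tmp/hdr_check.cpp -o /dev/null \
            || echo "not self-contained: $h"
    done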
00:02:14.169 CXX test/cpp_headers/version.o 00:02:14.169 CXX test/cpp_headers/vfio_user_pci.o 00:02:14.169 CXX test/cpp_headers/vhost.o 00:02:14.169 CXX test/cpp_headers/vfio_user_spec.o 00:02:14.169 CXX test/cpp_headers/vmd.o 00:02:14.169 CXX test/cpp_headers/xor.o 00:02:14.169 CXX test/cpp_headers/zipf.o 00:02:14.169 LINK verify 00:02:14.169 LINK spdk_tgt 00:02:14.169 LINK ioat_perf 00:02:14.169 LINK bdev_svc 00:02:14.426 LINK spdk_dd 00:02:14.426 LINK spdk_trace 00:02:14.426 LINK test_dma 00:02:14.426 LINK pci_ut 00:02:14.426 LINK nvme_fuzz 00:02:14.682 LINK spdk_bdev 00:02:14.682 LINK spdk_nvme 00:02:14.682 LINK vhost_fuzz 00:02:14.682 CC test/event/reactor/reactor.o 00:02:14.682 CC test/event/reactor_perf/reactor_perf.o 00:02:14.682 CC test/event/event_perf/event_perf.o 00:02:14.682 CC examples/sock/hello_world/hello_sock.o 00:02:14.682 CC test/event/scheduler/scheduler.o 00:02:14.682 CC examples/vmd/led/led.o 00:02:14.682 CC examples/vmd/lsvmd/lsvmd.o 00:02:14.682 CC test/event/app_repeat/app_repeat.o 00:02:14.682 LINK spdk_nvme_identify 00:02:14.682 CC examples/idxd/perf/perf.o 00:02:14.682 CC examples/thread/thread/thread_ex.o 00:02:14.682 LINK spdk_nvme_perf 00:02:14.682 LINK mem_callbacks 00:02:14.682 LINK reactor_perf 00:02:14.682 LINK reactor 00:02:14.682 LINK event_perf 00:02:14.682 LINK spdk_top 00:02:14.682 LINK lsvmd 00:02:14.682 CC app/vhost/vhost.o 00:02:14.682 LINK led 00:02:14.939 LINK app_repeat 00:02:14.939 LINK hello_sock 00:02:14.939 LINK scheduler 00:02:14.939 CC test/nvme/cuse/cuse.o 00:02:14.939 CC test/nvme/reset/reset.o 00:02:14.939 CC test/nvme/simple_copy/simple_copy.o 00:02:14.939 CC test/nvme/fused_ordering/fused_ordering.o 00:02:14.939 CC test/nvme/compliance/nvme_compliance.o 00:02:14.939 CC test/nvme/fdp/fdp.o 00:02:14.939 CC test/nvme/err_injection/err_injection.o 00:02:14.939 CC test/nvme/reserve/reserve.o 00:02:14.939 CC test/nvme/e2edp/nvme_dp.o 00:02:14.939 CC test/nvme/overhead/overhead.o 00:02:14.939 CC test/nvme/connect_stress/connect_stress.o 00:02:14.939 CC test/nvme/aer/aer.o 00:02:14.939 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:14.939 CC test/nvme/startup/startup.o 00:02:14.939 CC test/nvme/sgl/sgl.o 00:02:14.939 CC test/nvme/boot_partition/boot_partition.o 00:02:14.939 LINK thread 00:02:14.939 CC test/accel/dif/dif.o 00:02:14.939 CC test/blobfs/mkfs/mkfs.o 00:02:14.939 LINK idxd_perf 00:02:14.939 LINK vhost 00:02:14.939 CC test/lvol/esnap/esnap.o 00:02:14.939 LINK memory_ut 00:02:15.197 LINK startup 00:02:15.197 LINK err_injection 00:02:15.197 LINK doorbell_aers 00:02:15.197 LINK fused_ordering 00:02:15.197 LINK boot_partition 00:02:15.197 LINK connect_stress 00:02:15.197 LINK reserve 00:02:15.197 LINK simple_copy 00:02:15.197 LINK reset 00:02:15.197 LINK mkfs 00:02:15.197 LINK sgl 00:02:15.197 LINK nvme_dp 00:02:15.197 LINK overhead 00:02:15.197 LINK nvme_compliance 00:02:15.197 LINK aer 00:02:15.197 LINK fdp 00:02:15.197 CC examples/nvme/reconnect/reconnect.o 00:02:15.197 CC examples/nvme/hotplug/hotplug.o 00:02:15.197 CC examples/nvme/abort/abort.o 00:02:15.197 CC examples/nvme/hello_world/hello_world.o 00:02:15.197 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:15.197 CC examples/nvme/arbitration/arbitration.o 00:02:15.197 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:15.197 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:15.197 LINK dif 00:02:15.456 CC examples/accel/perf/accel_perf.o 00:02:15.456 CC examples/blob/cli/blobcli.o 00:02:15.456 LINK pmr_persistence 00:02:15.456 LINK cmb_copy 00:02:15.456 CC 
examples/blob/hello_world/hello_blob.o 00:02:15.456 LINK hello_world 00:02:15.456 LINK hotplug 00:02:15.456 LINK abort 00:02:15.456 LINK reconnect 00:02:15.456 LINK iscsi_fuzz 00:02:15.456 LINK arbitration 00:02:15.714 LINK nvme_manage 00:02:15.714 LINK hello_blob 00:02:15.714 LINK accel_perf 00:02:15.714 CC test/bdev/bdevio/bdevio.o 00:02:15.714 LINK blobcli 00:02:15.972 LINK cuse 00:02:16.230 LINK bdevio 00:02:16.230 CC examples/bdev/hello_world/hello_bdev.o 00:02:16.230 CC examples/bdev/bdevperf/bdevperf.o 00:02:16.489 LINK hello_bdev 00:02:16.748 LINK bdevperf 00:02:17.315 CC examples/nvmf/nvmf/nvmf.o 00:02:17.573 LINK nvmf 00:02:18.510 LINK esnap 00:02:18.768 00:02:18.768 real 0m43.111s 00:02:18.768 user 6m29.793s 00:02:18.768 sys 3m19.849s 00:02:18.768 21:40:12 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:18.768 21:40:12 make -- common/autotest_common.sh@10 -- $ set +x 00:02:18.768 ************************************ 00:02:18.768 END TEST make 00:02:18.768 ************************************ 00:02:18.768 21:40:12 -- common/autotest_common.sh@1142 -- $ return 0 00:02:18.768 21:40:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:18.768 21:40:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:18.768 21:40:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:18.768 21:40:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.768 21:40:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:18.768 21:40:12 -- pm/common@44 -- $ pid=3404964 00:02:18.768 21:40:12 -- pm/common@50 -- $ kill -TERM 3404964 00:02:18.768 21:40:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.768 21:40:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:18.768 21:40:12 -- pm/common@44 -- $ pid=3404966 00:02:18.768 21:40:12 -- pm/common@50 -- $ kill -TERM 3404966 00:02:18.768 21:40:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.768 21:40:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:18.768 21:40:12 -- pm/common@44 -- $ pid=3404967 00:02:18.768 21:40:12 -- pm/common@50 -- $ kill -TERM 3404967 00:02:18.768 21:40:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.768 21:40:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:18.768 21:40:12 -- pm/common@44 -- $ pid=3404990 00:02:18.768 21:40:12 -- pm/common@50 -- $ sudo -E kill -TERM 3404990 00:02:18.768 21:40:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:18.768 21:40:12 -- nvmf/common.sh@7 -- # uname -s 00:02:18.768 21:40:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:18.768 21:40:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:18.768 21:40:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:18.768 21:40:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:18.768 21:40:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:18.768 21:40:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:18.768 21:40:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:18.768 21:40:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:18.768 21:40:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:18.768 21:40:13 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:19.028 21:40:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:19.028 21:40:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:19.028 21:40:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:19.028 21:40:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:19.028 21:40:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:19.028 21:40:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:19.028 21:40:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:19.028 21:40:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:19.028 21:40:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:19.028 21:40:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:19.028 21:40:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.028 21:40:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.028 21:40:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.028 21:40:13 -- paths/export.sh@5 -- # export PATH 00:02:19.028 21:40:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.028 21:40:13 -- nvmf/common.sh@47 -- # : 0 00:02:19.028 21:40:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:19.028 21:40:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:19.028 21:40:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:19.028 21:40:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:19.028 21:40:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:19.028 21:40:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:19.028 21:40:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:19.028 21:40:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:19.028 21:40:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:19.028 21:40:13 -- spdk/autotest.sh@32 -- # uname -s 00:02:19.028 21:40:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:19.028 21:40:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:19.028 21:40:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.028 21:40:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:19.028 21:40:13 -- 
spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.028 21:40:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:19.028 21:40:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:19.028 21:40:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:19.028 21:40:13 -- spdk/autotest.sh@48 -- # udevadm_pid=3464164 00:02:19.028 21:40:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:19.028 21:40:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:19.028 21:40:13 -- pm/common@17 -- # local monitor 00:02:19.028 21:40:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.028 21:40:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.028 21:40:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.028 21:40:13 -- pm/common@21 -- # date +%s 00:02:19.028 21:40:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.028 21:40:13 -- pm/common@21 -- # date +%s 00:02:19.028 21:40:13 -- pm/common@25 -- # sleep 1 00:02:19.028 21:40:13 -- pm/common@21 -- # date +%s 00:02:19.028 21:40:13 -- pm/common@21 -- # date +%s 00:02:19.028 21:40:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721072413 00:02:19.028 21:40:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721072413 00:02:19.028 21:40:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721072413 00:02:19.028 21:40:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721072413 00:02:19.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721072413_collect-vmstat.pm.log 00:02:19.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721072413_collect-cpu-load.pm.log 00:02:19.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721072413_collect-cpu-temp.pm.log 00:02:19.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721072413_collect-bmc-pm.bmc.pm.log 00:02:19.963 21:40:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:19.963 21:40:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:19.963 21:40:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:19.963 21:40:14 -- common/autotest_common.sh@10 -- # set +x 00:02:19.963 21:40:14 -- spdk/autotest.sh@59 -- # create_test_list 00:02:19.963 21:40:14 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:19.963 21:40:14 -- common/autotest_common.sh@10 -- # set +x 00:02:19.963 21:40:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:19.963 21:40:14 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.963 21:40:14 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.963 21:40:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:19.963 21:40:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.963 21:40:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:19.963 21:40:14 -- common/autotest_common.sh@1455 -- # uname 00:02:19.963 21:40:14 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:19.963 21:40:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:19.963 21:40:14 -- common/autotest_common.sh@1475 -- # uname 00:02:19.963 21:40:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:19.963 21:40:14 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:19.963 21:40:14 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:19.963 21:40:14 -- spdk/autotest.sh@72 -- # hash lcov 00:02:19.963 21:40:14 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:19.963 21:40:14 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:19.963 --rc lcov_branch_coverage=1 00:02:19.963 --rc lcov_function_coverage=1 00:02:19.963 --rc genhtml_branch_coverage=1 00:02:19.963 --rc genhtml_function_coverage=1 00:02:19.963 --rc genhtml_legend=1 00:02:19.963 --rc geninfo_all_blocks=1 00:02:19.963 ' 00:02:19.963 21:40:14 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:19.963 --rc lcov_branch_coverage=1 00:02:19.963 --rc lcov_function_coverage=1 00:02:19.963 --rc genhtml_branch_coverage=1 00:02:19.963 --rc genhtml_function_coverage=1 00:02:19.963 --rc genhtml_legend=1 00:02:19.963 --rc geninfo_all_blocks=1 00:02:19.963 ' 00:02:19.963 21:40:14 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:19.963 --rc lcov_branch_coverage=1 00:02:19.963 --rc lcov_function_coverage=1 00:02:19.963 --rc genhtml_branch_coverage=1 00:02:19.963 --rc genhtml_function_coverage=1 00:02:19.963 --rc genhtml_legend=1 00:02:19.963 --rc geninfo_all_blocks=1 00:02:19.963 --no-external' 00:02:19.963 21:40:14 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:19.963 --rc lcov_branch_coverage=1 00:02:19.963 --rc lcov_function_coverage=1 00:02:19.963 --rc genhtml_branch_coverage=1 00:02:19.963 --rc genhtml_function_coverage=1 00:02:19.963 --rc genhtml_legend=1 00:02:19.963 --rc geninfo_all_blocks=1 00:02:19.963 --no-external' 00:02:19.963 21:40:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:19.963 lcov: LCOV version 1.14 00:02:19.963 21:40:14 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:21.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:21.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:21.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:21.340 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:21.340 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:02:21.340 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
(the identical two-line "no functions found" warning repeats for every remaining header stub under test/cpp_headers/ -- barrier through xor, roughly seventy .gcno files -- since these compile-only header tests contain no executable functions)
00:02:34.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:34.066 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
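(Aside on the coverage capture above: the -i flag takes an initial, all-zero snapshot from the .gcno files produced at build time, so that sources no test ever executes still show up at 0% in the final report instead of vanishing. A minimal sketch of the usual lcov flow -- the directory, test names, and "run the suite" step are illustrative, not the exact autotest invocation:)

    lcov --capture --initial --directory ./spdk -t Baseline -o cov_base.info   # zero-count baseline from .gcno
    # ... run the instrumented test suite; it writes .gcda counter files ...
    lcov --capture --directory ./spdk -t Tests -o cov_test.info                # post-test counters
    lcov -a cov_base.info -a cov_test.info -o cov_total.info                   # merge; untested files keep 0%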
00:02:46.312 21:40:38 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:46.312 21:40:38 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:46.312 21:40:38 -- common/autotest_common.sh@10 -- # set +x
00:02:46.312 21:40:38 -- spdk/autotest.sh@91 -- # rm -f
00:02:46.312 21:40:38 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:46.880 0000:5e:00.0 (8086 0a54): Already using the nvme driver
(the sixteen I/OAT DMA channels -- 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7, all 8086 2021 -- each report "Already using the ioatdma driver")
00:02:47.139 21:40:41 -- spdk/autotest.sh@96 -- # get_zoned_devs
21:40:41 -- common/autotest_common.sh@1669 -- # zoned_devs=()
21:40:41 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
21:40:41 -- common/autotest_common.sh@1670 -- # local nvme bdf
21:40:41 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
21:40:41 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
21:40:41 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:47.139 21:40:41 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:47.139 21:40:41 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:47.139 21:40:41 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:47.139 21:40:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:47.139 21:40:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:47.139 21:40:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:47.139 21:40:41 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:47.139 21:40:41 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:47.399 No valid GPT data, bailing
00:02:47.399 21:40:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:47.399 21:40:41 -- scripts/common.sh@391 -- # pt=
00:02:47.399 21:40:41 -- scripts/common.sh@392 -- # return 1
00:02:47.399 21:40:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:47.399 1+0 records in
00:02:47.399 1+0 records out
00:02:47.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00151438 s, 692 MB/s
00:02:47.399 21:40:41 -- spdk/autotest.sh@118 -- # sync
00:02:47.399 21:40:41 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:47.399 21:40:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:47.399 21:40:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes
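(What just ran: for each /dev/nvme*n* namespace, block_in_use probes for an existing partition table -- first with SPDK's spdk-gpt.py, then with blkid -s PTTYPE -- and only a device with no table gets its first MiB zeroed. A hedged sketch of that guard; the real scripts/common.sh differs in detail, and SPDK_DIR is an assumed variable pointing at the repo:)

    block_in_use() {
      local block=$1 pt
      # the helper exits zero when it finds a valid GPT, meaning the disk is in use
      "$SPDK_DIR/scripts/spdk-gpt.py" "$block" && return 0
      # fall back to blkid: any non-empty PTTYPE means some partition table exists
      pt=$(blkid -s PTTYPE -o value "$block")
      [[ -n $pt ]]
    }
    # wipe only devices that carry no partition table (illustrative)
    block_in_use /dev/nvme0n1 || dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1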
00:02:52.673 21:40:46 -- spdk/autotest.sh@124 -- # uname -s
00:02:52.673 21:40:46 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:52.673 21:40:46 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:52.673 21:40:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:52.673 21:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:52.673 21:40:46 -- common/autotest_common.sh@10 -- # set +x
00:02:52.673 ************************************
00:02:52.673 START TEST setup.sh
00:02:52.673 ************************************
00:02:52.673 21:40:46 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:52.673 * Looking for test storage...
00:02:52.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:52.673 21:40:46 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:52.673 21:40:46 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:52.673 21:40:46 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:52.673 21:40:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:52.673 21:40:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:52.673 21:40:46 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:52.673 ************************************
00:02:52.673 START TEST acl
00:02:52.673 ************************************
00:02:52.673 21:40:46 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:52.673 * Looking for test storage...
00:02:52.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:52.673 21:40:46 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
(the same get_zoned_devs / is_block_zoned nvme0n1 trace as during pre-cleanup repeats here, again finding no zoned devices)
21:40:46 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
21:40:46 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
21:40:46 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
21:40:46 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
21:40:46 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
21:40:46 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
21:40:46 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:55.961 21:40:49 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
21:40:49 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
21:40:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
21:40:49 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
21:40:49 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
21:40:49 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:58.496 Hugepages
00:02:58.496 node hugesize free / total
(acl.sh's read loop skips the hugepage rows -- each "[[ 1048576kB == *:*:*.* ]]" / "[[ 2048kB == *:*:*.* ]]" test fails and hits continue -- as well as the "Type BDF Vendor Device NUMA Driver Device Block devices" header row, then reaches the PCI rows:)
21:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
21:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
21:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
21:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
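(The @18/@19/@20 trace above and below is collect_setup_devs at work: each `read -r _ dev _ _ _ driver _` consumes one row of the `setup.sh status` table, rows whose second field is not a PCI BDF are discarded, and only nvme-bound controllers are collected. Roughly, with names illustrative rather than the literal acl.sh source:)

    declare -a devs; declare -A drivers
    while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue    # not a BDF: hugepage size or header row
      [[ $driver == nvme ]] || continue    # ioatdma channels are skipped here
      devs+=("$dev"); drivers["$dev"]=$driver
    done < <(scripts/setup.sh status)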
00:02:58.496 21:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
21:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
21:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
21:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
(the same match/continue/read triple repeats for the remaining ioatdma channels 0000:00:04.2-0000:00:04.7, then the nvme controller is reached:)
21:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
21:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
21:40:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
21:40:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
21:40:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
21:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
(the 0000:80:04.0-0000:80:04.7 ioatdma channels are then skipped the same way)
00:02:58.497 21:40:52 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:02:58.497 21:40:52 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:02:58.497 21:40:52 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:58.497 21:40:52 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:58.497 21:40:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:58.497 ************************************
00:02:58.497 START TEST denied
00:02:58.497 ************************************
00:02:58.497 21:40:52 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:02:58.497 21:40:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:02:58.497 21:40:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:02:58.497 21:40:52 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:02:58.497 21:40:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:02:58.497 21:40:52 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:01.036 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:03:01.036 21:40:55 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
21:40:55 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
21:40:55 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
21:40:55 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
21:40:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:03:01.037 21:40:55 setup.sh.acl.denied --
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:01.037 21:40:55 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:01.037 21:40:55 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:01.037 21:40:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.037 21:40:55 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.232 00:03:05.232 real 0m6.589s 00:03:05.232 user 0m2.061s 00:03:05.232 sys 0m3.855s 00:03:05.232 21:40:58 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.232 21:40:58 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:05.232 ************************************ 00:03:05.232 END TEST denied 00:03:05.232 ************************************ 00:03:05.232 21:40:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:05.232 21:40:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:05.232 21:40:59 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.232 21:40:59 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.232 21:40:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:05.232 ************************************ 00:03:05.232 START TEST allowed 00:03:05.232 ************************************ 00:03:05.232 21:40:59 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:05.232 21:40:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:05.232 21:40:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:05.232 21:40:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:05.232 21:40:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.232 21:40:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.520 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.520 21:41:02 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:08.520 21:41:02 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:08.520 21:41:02 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:08.520 21:41:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.520 21:41:02 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.835 00:03:11.835 real 0m6.305s 00:03:11.835 user 0m1.817s 00:03:11.835 sys 0m3.476s 00:03:11.835 21:41:05 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.835 21:41:05 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:11.835 ************************************ 00:03:11.835 END TEST allowed 00:03:11.835 ************************************ 00:03:11.835 21:41:05 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:11.835 00:03:11.835 real 0m18.854s 00:03:11.835 user 0m6.204s 00:03:11.835 sys 0m11.163s 00:03:11.835 21:41:05 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.835 21:41:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.835 ************************************ 00:03:11.835 END TEST acl 00:03:11.835 ************************************ 00:03:11.835 21:41:05 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:11.835 21:41:05 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.835 21:41:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.835 21:41:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.835 21:41:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.835 ************************************ 00:03:11.835 START TEST hugepages 00:03:11.835 ************************************ 00:03:11.835 21:41:05 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.835 * Looking for test storage... 00:03:11.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173423312 kB' 'MemAvailable: 176296612 kB' 'Buffers: 3896 kB' 'Cached: 10160460 kB' 'SwapCached: 0 kB' 'Active: 7179188 kB' 'Inactive: 3507524 kB' 'Active(anon): 6787180 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525672 kB' 'Mapped: 203936 kB' 'Shmem: 6264824 kB' 'KReclaimable: 236436 kB' 'Slab: 824452 kB' 'SReclaimable: 236436 kB' 'SUnreclaim: 588016 kB' 'KernelStack: 21104 kB' 'PageTables: 10568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8320424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315868 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': '
00:03:11.835 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
(the same [[ $var == Hugepagesize ]] / continue / IFS=': ' / read -r var val _ trace repeats for every remaining /proc/meminfo field shown in the dump above -- Active(file) through HardwareCorrupted and onward -- until the Hugepagesize entry is reached)
setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:11.836 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.837 
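The clear_hp loop traced above zeroes every per-node hugepage pool before the test allocates its own pages. A minimal standalone sketch of that step, assuming the standard Linux sysfs layout (a re-illustration for readability, not the SPDK helper itself):

    #!/usr/bin/env bash
    # Zero every hugepage pool on every NUMA node (requires root).
    # Mirrors the echo 0 loop in setup/hugepages.sh traced above.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        [[ -w $hp ]] || continue   # skip if the glob matched nothing or not root
        echo 0 > "$hp"
    done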
00:03:11.837 21:41:05 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:11.837 21:41:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:11.837 21:41:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:11.837 21:41:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:11.837 ************************************
00:03:11.837 START TEST default_setup
00:03:11.837 ************************************
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.837 21:41:05 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
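get_test_nr_hugepages above converts a size in kB into a page count: the requested 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size yields the 1024 pages assigned to node 0. The same arithmetic as a sketch, with the values taken from this trace:

    size_kb=2097152             # requested pool size (2 GiB, from the trace)
    hugepage_kb=2048            # Hugepagesize read from /proc/meminfo earlier
    nr_hugepages=$(( size_kb / hugepage_kb ))
    echo "$nr_hugepages"        # 1024, matching nr_hugepages=1024 above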
00:03:14.374 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:14.374 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:14.943 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:14.943 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175568720 kB' 'MemAvailable: 178441752 kB' 'Buffers: 3896 kB' 'Cached: 10160564 kB' 'SwapCached: 0 kB' 'Active: 7194372 kB' 'Inactive: 3507524 kB' 'Active(anon): 6802364 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540228 kB' 'Mapped: 203844 kB' 'Shmem: 6264928 kB' 'KReclaimable: 236404 kB' 'Slab: 822992 kB' 'SReclaimable: 236404 kB' 'SUnreclaim: 586588 kB' 'KernelStack: 20560 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8334444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315292 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
00:03:14.944 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [loop condensed: read/compare/continue repeats for every /proc/meminfo field until AnonHugePages matches]
00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
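Each get_meminfo call in this trace walks /proc/meminfo with IFS=': ', continuing past every field until the requested key matches, then echoes its value; the AnonHugePages lookup above returned 0. A self-contained sketch of that lookup pattern in plain bash (an illustration, not the SPDK source itself):

    #!/usr/bin/env bash
    # Print the value column of one /proc/meminfo key, as traced above:
    # split each "Key: value [unit]" line and skip until the key matches.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this host, per the snapshot above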
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.208 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175570508 kB' 'MemAvailable: 178443792 kB' 'Buffers: 3896 kB' 'Cached: 10160568 kB' 'SwapCached: 0 kB' 'Active: 7193548 kB' 'Inactive: 3507524 kB' 'Active(anon): 6801540 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539876 kB' 'Mapped: 203836 kB' 'Shmem: 6264932 kB' 'KReclaimable: 236404 kB' 'Slab: 822992 kB' 'SReclaimable: 236404 kB' 'SUnreclaim: 586588 kB' 'KernelStack: 20528 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8334596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315276 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.209 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
[trace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <key> == HugePages_Surp ]] for the remaining keys, SecPageTables through HugePages_Rsvd; none matches, so every iteration hits continue]
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
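For reference, this is the shape of the get_meminfo helper the trace is single-stepping, reconstructed from the setup/common.sh line references above (@17-@33). It is a hedged sketch, not the verbatim SPDK source: the redirections, the process-substitution plumbing, and whatever sits behind the [[ -n '' ]] test at @25 are inferred from the order of the traced commands.

shopt -s extglob   # needed for the +([0-9]) pattern below

# get_meminfo <key> [node]: print the value of <key> from /proc/meminfo, or
# from /sys/devices/system/node/node<N>/meminfo when a node id is given.
get_meminfo() {
    local get=$1
    local node=${2:-}   # "" here; "0" in the per-node call further down
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # With node="" this probes the nonexistent path .../node/meminfo and keeps
    # the global file, which is exactly what @23/@25 show in the trace.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <id> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    # One iteration per meminfo key: the IFS / read / [[ ]] / continue pattern
    # that fills this log; the unit ("kB") lands in _, and echo+return fire on
    # the first matching key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

Called as get_meminfo HugePages_Surp it walks nearly the whole snapshot before matching one of the last keys, which is why the read/compare loop dominates this log; get_meminfo HugePages_Surp 0, seen further down, switches mem_f to node0's file.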
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:15.210 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175566676 kB' 'MemAvailable: 178439960 kB' 'Buffers: 3896 kB' 'Cached: 10160592 kB' 'SwapCached: 0 kB' 'Active: 7198212 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806204 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544616 kB' 'Mapped: 204340 kB' 'Shmem: 6264956 kB' 'KReclaimable: 236404 kB' 'Slab: 823044 kB' 'SReclaimable: 236404 kB' 'SUnreclaim: 586640 kB' 'KernelStack: 20560 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8349580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315276 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
[trace condensed: the same @31-32 read/compare loop walks the 50 keys of the snapshot above, MemTotal through HugePages_Free; none matches HugePages_Rsvd, so every iteration hits continue]
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:15.212 nr_hugepages=1024
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:15.212 resv_hugepages=0
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:15.212 surplus_hugepages=0
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:15.212 anon_hugepages=0
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
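The lookups feed the consistency checks at setup/hugepages.sh@107 and @109: the hugepage pool is considered sane when the requested page count equals what the kernel reports once surplus and reserved pages are folded in, and @109 then confirms the pool exactly matches the request. A minimal restatement in bash, using only numbers printed in this run; the variable names mirror the script's own:

# Values echoed above (hugepages.sh@102-105) and fetched by get_meminfo.
nr_hugepages=1024   # requested default pool size
surp=0              # HugePages_Surp from /proc/meminfo
resv=0              # HugePages_Rsvd from /proc/meminfo

# The @107 invariant: requested == counted + surplus + reserved (1024 == 1024+0+0).
(( 1024 == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"

# Cross-check against the snapshot: 1024 pages x 2048 kB (Hugepagesize)
echo $(( 1024 * 2048 ))   # 2097152, matching 'Hugetlb: 2097152 kB' (2 GiB)

The snapshot agrees on every axis: HugePages_Total 1024, HugePages_Free 1024, Hugetlb 2097152 kB.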
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:15.212 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175562644 kB' 'MemAvailable: 178435928 kB' 'Buffers: 3896 kB' 'Cached: 10160616 kB' 'SwapCached: 0 kB' 'Active: 7194052 kB' 'Inactive: 3507524 kB' 'Active(anon): 6802044 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540412 kB' 'Mapped: 203836 kB' 'Shmem: 6264980 kB' 'KReclaimable: 236404 kB' 'Slab: 823052 kB' 'SReclaimable: 236404 kB' 'SUnreclaim: 586648 kB' 'KernelStack: 20560 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8335012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315260 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
[trace condensed: the @31-32 read/compare loop walks the snapshot above from MemTotal through Unaccepted; none of those 48 keys matches HugePages_Total, so every iteration hits continue]
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:15.214 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86143852 kB' 'MemUsed: 11518832 kB' 'SwapCached: 0 kB' 'Active: 4970648 kB' 'Inactive: 3335448 kB' 'Active(anon): 4813108 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8148620 kB' 'Mapped: 74068 kB' 'AnonPages: 160684 kB' 'Shmem: 4655632 kB' 'KernelStack: 11672 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127132 kB' 'Slab: 400468 kB' 'SReclaimable: 127132 kB' 'SUnreclaim: 273336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
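get_nodes (hugepages.sh@27-33) enumerates the NUMA nodes under /sys/devices/system/node and records how many hugepages landed on each; the trace shows nodes_sys[0]=1024 and nodes_sys[1]=0, i.e. all 1024 pages were placed on node0 of this 2-node machine. The @115-@117 loop then validates each node in turn, starting with node0, whose per-node meminfo snapshot appears above (note its extra MemUsed and FilePages keys, and that get_meminfo's @29 strip has already removed the 'Node 0 ' prefix from every line). A sketch of the enumeration, reconstructed from the trace; the right-hand side of the @30 assignment is not visible in the log, so reading the per-node sysfs counter for 2048 kB pages is an assumption:

shopt -s extglob nullglob   # +([0-9]) globbing; expand to nothing on no match
declare -a nodes_sys=()
no_nodes=0

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} reduces ".../node0" to "0" (trace: =1024, then =0).
        # The source path below is assumed; the trace only shows the result.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this machine (@32)
    (( no_nodes > 0 ))          # @33: fail if no NUMA nodes were found
}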
[trace condensed: the @31-32 read/compare loop walks the node0 snapshot above; MemTotal through WritebackTmp all miss HugePages_Surp and hit continue; the scan resumes below]
00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 --
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.215 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
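The scan elided above is the body of the get_meminfo helper in setup/common.sh, whose every step the xtrace prints. A minimal bash sketch of the idiom, reconstructed from the traced commands (the function and variable names come from the trace; the process-substitution plumbing and the extglob requirement are assumptions):

    # get_meminfo KEY [NODE] -> prints KEY's value from (per-node) meminfo
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # with a node argument, read that node's own meminfo instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # strip "Node N " prefixes (extglob)
        while IFS=': ' read -r var val _; do # "MemTotal: 191381152 kB" -> var, val
            [[ $var == "$get" ]] || continue # the repeated "continue" steps above
            echo "$val" && return 0          # e.g. 0 for HugePages_Surp here
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Read this way, the wall of continue steps is just a linear scan, one [[ ... ]] test per meminfo field, and the trace below resumes at the last few fields before HugePages_Surp matches and the helper echoes 0.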
00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:15.216 node0=1024 expecting 1024 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:15.216 00:03:15.216 real 0m3.679s 00:03:15.216 user 0m1.103s 00:03:15.216 sys 0m1.785s 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.216 21:41:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:15.216 ************************************ 00:03:15.216 END TEST default_setup 00:03:15.216 ************************************ 00:03:15.216 21:41:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:15.216 21:41:09 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:15.216 21:41:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.216 21:41:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.216 21:41:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.216 ************************************ 00:03:15.216 START TEST per_node_1G_alloc 00:03:15.216 ************************************ 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- 
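The hugepages.sh@49-@73 steps that follow turn the 1G request into per-node counts of default-size pages: 1048576 kB divided by the 2048 kB Hugepagesize reported in the snapshots gives nr_hugepages=512, recorded once per requested node. A sketch of that arithmetic, with names taken from the trace and the bodies reconstructed (the default_hugepages value is inferred from the snapshots):

    default_hugepages=2048                   # kB, per "Hugepagesize: 2048 kB"
    # get_test_nr_hugepages 1048576 0 1  ->  nodes_test[0]=512 nodes_test[1]=512
    get_test_nr_hugepages() {
        local size=$1; shift                 # hugepages.sh@49-51: 1048576 kB = 1G
        local node_ids=("$@")                # @52: ('0' '1')
        ((size >= default_hugepages)) || return 1    # @55
        nr_hugepages=$((size / default_hugepages))   # @57: 1048576 / 2048 = 512
        get_test_nr_hugepages_per_node "${node_ids[@]}"  # @58
    }
    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")              # @62: ('0' '1')
        local _nr_hugepages=$nr_hugepages    # @64: 512
        local _no_nodes=$#                   # @65: 2
        local -g nodes_test=()               # @67: global result array
        ((_no_nodes > 0)) || return 1        # @69
        for _no_nodes in "${user_nodes[@]}"; do   # @70: loop var reused as index
            nodes_test[_no_nodes]=$_nr_hugepages  # @71: 512 on node 0 and node 1
        done
    }

The harness then re-runs scripts/setup.sh with NRHUGE=512 HUGENODE=0,1, which is consistent with the later hugepages.sh@147 step (nr_hugepages=1024) and with the system-wide 'HugePages_Total: 1024' in the snapshots: 512 pages on each of the two nodes.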
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.216 21:41:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.756 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:17.756 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:17.756 
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:17.756 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.756 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175551352 kB' 'MemAvailable: 178424636 kB' 'Buffers: 3896 kB' 'Cached: 10160704 kB' 'SwapCached: 0 kB' 'Active: 7192424 kB' 'Inactive: 3507524 kB' 'Active(anon): 6800416 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538080 kB' 'Mapped: 203928 kB' 'Shmem: 6265068 kB' 'KReclaimable: 236404 kB' 'Slab: 823236 kB' 'SReclaimable: 236404 kB' 'SUnreclaim: 586832 kB' 'KernelStack: 20608 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8335472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
[xtrace elided, 00:03:17.756-00:03:17.758 21:41:11: the setup/common.sh@31-32 read loop walks every field of the /proc/meminfo snapshot above, from MemTotal through HardwareCorrupted, taking the "continue" branch for each field that is not AnonHugePages]
00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.758 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175552116 kB' 'MemAvailable: 178425400 kB' 'Buffers: 3896 kB' 'Cached: 10160708 kB' 'SwapCached: 0 kB' 'Active: 7191492 kB' 'Inactive: 3507524 kB' 'Active(anon): 6799484 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537628 kB' 'Mapped: 203848 kB' 'Shmem: 6265072 kB' 'KReclaimable: 236404 kB' 'Slab: 823188 kB' 'SReclaimable: 236404 kB' 'SUnreclaim: 586784 kB' 'KernelStack: 20592 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8335492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
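With nr_hugepages now 1024 (512 on each node), verify_nr_hugepages interrogates meminfo three times through the same helper: AnonHugePages (already 0 above), then HugePages_Surp, then HugePages_Rsvd. A rough outline of the bookkeeping, reconstructed from the hugepages.sh@89-@130 steps; the THP sysfs read and the final comparison glue are assumptions, not the verbatim source:

    verify_nr_hugepages() {
        local node sorted_t sorted_s surp resv anon      # @89-@94
        # @96: the AnonHugePages check applies while THP is not pinned to [never]
        [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]] &&
            anon=$(get_meminfo AnonHugePages)            # @97: anon=0
        surp=$(get_meminfo HugePages_Surp)               # @99: the scan just below
        resv=$(get_meminfo HugePages_Rsvd)               # @100: read next in the log
        for node in "${!nodes_test[@]}"; do              # @126
            local expected=$((nodes_test[node] + surp))  # assumption: surplus-adjusted
            sorted_t[nodes_test[node]]=1                 # @127
            echo "node$node=${nodes_test[node]} expecting $expected"  # @128
        done
    }

In the default_setup run above this printed "node0=1024 expecting 1024", and the [[ 1024 == 1024 ]] test at hugepages.sh@130 is what let run_test reach the END TEST banner.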
[xtrace elided, 00:03:17.758-00:03:17.760 21:41:11: the setup/common.sh@31-32 read loop walks every field of the second snapshot above, taking the "continue" branch for each field that is not HugePages_Surp]
00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175552564 kB' 'MemAvailable: 178425848 kB' 'Buffers: 3896 kB' 'Cached: 10160724 kB' 'SwapCached: 0 kB' 'Active: 7191512 kB' 'Inactive: 3507524 kB' 'Active(anon): 6799504 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537624 kB' 'Mapped: 203848 kB' 'Shmem: 6265088 kB' 'KReclaimable: 236404 kB' 'Slab: 823188 kB' 'SReclaimable: 236404 kB' 'SUnreclaim: 586784 kB' 'KernelStack: 20592 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8335516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.760 21:41:11 
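The printf above is get_meminfo's verbatim dump of /proc/meminfo. For a quick look at just the hugepage counters this test keys on, a one-liner suffices (illustrative session; values taken from the snapshot above, column alignment approximate):

    $ grep -E '^(HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo
    HugePages_Total:    1024
    HugePages_Free:     1024
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    Hugetlb:         2097152 kB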
00:03:17.760 21:41:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31/@32 -- # IFS=': '; read -r var val _; [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]; continue   [condensed: the same check-and-continue trace repeats for every key from MemTotal down to HugePages_Free]
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
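Both lookups above (HugePages_Surp, then HugePages_Rsvd) run the same get_meminfo helper from setup/common.sh. A minimal sketch of its logic, reconstructed from the traced commands (common.sh@16 through @33); the upstream source may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo,
    # or from the per-node meminfo file when NODE is given.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Prefer the per-node view when it exists (node may be empty).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd     # -> 0 on this box
    get_meminfo HugePages_Surp 0   # -> node0's surplus pages

The trace is so long because, under set -x, every non-matching key costs three traced commands (IFS=': ', read, the [[ ]] test) plus a continue.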
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:18.026 nr_hugepages=1024
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:18.026 resv_hugepages=0
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:18.026 surplus_hugepages=0
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:18.026 anon_hugepages=0
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175551888 kB' 'MemAvailable: 178425172 kB' 'Buffers: 3896 kB' 'Cached: 10160748 kB' 'SwapCached: 0 kB' 'Active: 7191540 kB' 'Inactive: 3507524 kB' 'Active(anon): 6799532 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537620 kB' 'Mapped: 203848 kB' 'Shmem: 6265112 kB' 'KReclaimable: 236404 kB' 'Slab: 823188 kB' 'SReclaimable: 236404 kB' 'SUnreclaim: 586784 kB' 'KernelStack: 20592 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8335536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
00:03:18.026 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31/@32 -- # IFS=': '; read -r var val _; [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]; continue   [condensed: per-key scan of /proc/meminfo until HugePages_Total]
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
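hugepages.sh@107 and @110 assert that the kernel's global ledger matches what the test requested: HugePages_Total (1024) must equal nr_hugepages plus surplus plus reserved, i.e. 1024 + 0 + 0 here, so both checks pass. get_nodes, traced next, then records 512 pages on each of the two NUMA nodes. A sketch of that discovery loop following the traced commands; where the per-node count comes from is an assumption here (the real hugepages.sh may source it differently):

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source: per-node count of preallocated 2048 kB pages
        # (512 per node on this system, so 2 x 512 = 1024 total).
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this box
    (( no_nodes > 0 ))          # sanity: at least one NUMA node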
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87184084 kB' 'MemUsed: 10478600 kB' 'SwapCached: 0 kB' 'Active: 4968160 kB' 'Inactive: 3335448 kB' 'Active(anon): 4810620 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8148624 kB' 'Mapped: 74068 kB' 'AnonPages: 158108 kB' 'Shmem: 4655636 kB' 'KernelStack: 11672 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127132 kB' 'Slab: 400616 kB' 'SReclaimable: 127132 kB' 'SUnreclaim: 273484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:18.027 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31/@32 -- # IFS=': '; read -r var val _; [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]; continue   [condensed: per-key scan of the node0 snapshot, MemTotal through Unaccepted, no match within this excerpt]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.028 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88369296 kB' 'MemUsed: 5349172 kB' 'SwapCached: 0 kB' 'Active: 2223384 kB' 'Inactive: 172076 kB' 'Active(anon): 1988916 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2016044 kB' 'Mapped: 129780 kB' 'AnonPages: 379508 kB' 'Shmem: 1609500 kB' 
'KernelStack: 8920 kB' 'PageTables: 5320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109272 kB' 'Slab: 422560 kB' 'SReclaimable: 109272 kB' 'SUnreclaim: 313288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.028 21:41:12
[xtrace condensed: setup/common.sh@31-32 field-scan loop repeats over the node1 meminfo fields (MemTotal through FileHugePages), none of which match HugePages_Surp]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.029 node0=512 expecting 512 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:18.029 node1=512 expecting 512 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:18.029 00:03:18.029 real 0m2.711s 00:03:18.029 user 0m1.081s 00:03:18.029 sys 0m1.613s 00:03:18.029 21:41:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.029 21:41:12 
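The field scans condensed above all follow the get_meminfo pattern visible in the setup/common.sh xtrace: pick the right meminfo file for the requested node, strip the per-node "Node <n> " line prefix, then scan key/value pairs until the requested field matches. A minimal sketch reconstructed from that trace (get_meminfo_sketch is an illustrative name, not the SPDK helper itself):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used to strip per-node prefixes

    get_meminfo_sketch() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        # Per-node queries read that NUMA node's own meminfo; its lines carry a
        # "Node <n> " prefix that is stripped before matching, as in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    # e.g. the query the trace above performs for NUMA node 1:
    get_meminfo_sketch HugePages_Surp 1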
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.029 ************************************ 00:03:18.029 END TEST per_node_1G_alloc 00:03:18.029 ************************************ 00:03:18.029 21:41:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:18.029 21:41:12 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:18.029 21:41:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.029 21:41:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.029 21:41:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.029 ************************************ 00:03:18.029 START TEST even_2G_alloc 00:03:18.029 ************************************ 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:18.029 21:41:12 
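The even_2G_alloc parameters just computed reduce to simple arithmetic: a 2097152 kB request divided by the 2048 kB default hugepage size yields 1024 pages, split evenly as 512 per node across the two NUMA nodes. A hedged sketch of that computation, with illustrative variable names mirroring the trace:

    size_kb=2097152            # requested total: 2 GiB expressed in kB
    default_hugepage_kb=2048   # default 2 MiB hugepage size in kB
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 2097152 / 2048 = 1024
    no_nodes=2
    declare -a nodes_test
    # hugepages.sh@81-84 fills the array back to front, one even share per node
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512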
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.029 21:41:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.604 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:20.604 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:20.604 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.604 21:41:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.604 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175570832 kB' 'MemAvailable: 178444100 kB' 'Buffers: 3896 kB' 'Cached: 10160860 kB' 'SwapCached: 0 kB' 'Active: 7189436 kB' 'Inactive: 3507524 kB' 'Active(anon): 6797428 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535376 kB' 'Mapped: 202820 kB' 'Shmem: 6265224 kB' 'KReclaimable: 236372 kB' 'Slab: 822588 kB' 'SReclaimable: 236372 kB' 'SUnreclaim: 586216 kB' 'KernelStack: 20512 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8326264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:20.604 21:41:14
[xtrace condensed: setup/common.sh@31-32 field-scan loop walks the /proc/meminfo fields above (MemTotal through HardwareCorrupted), stopping just before AnonHugePages]
setup/common.sh@31 -- # IFS=': ' 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175572712 kB' 'MemAvailable: 178445980 kB' 'Buffers: 3896 kB' 'Cached: 10160868 kB' 'SwapCached: 0 kB' 'Active: 7190156 kB' 'Inactive: 3507524 kB' 'Active(anon): 6798148 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536184 kB' 'Mapped: 202812 kB' 'Shmem: 6265232 kB' 'KReclaimable: 236372 kB' 'Slab: 822596 kB' 'SReclaimable: 236372 kB' 'SUnreclaim: 586224 kB' 'KernelStack: 20528 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8326908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.606 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
[xtrace condensed: setup/common.sh@31-32 HugePages_Surp scan continues field by field over the /proc/meminfo dump above (MemFree through SUnreclaim and onward)]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.607 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
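For reference while reading the trace: the snapshot above already shows the pool this case exercises, HugePages_Total 1024 at Hugepagesize 2048 kB, i.e. 1024 * 2048 kB = 2097152 kB = 2 GiB (the Hugetlb line), which is where even_2G_alloc gets its name. The get_meminfo pattern being traced boils down to the following minimal self-contained sketch; this is a reconstruction from the trace, not the verbatim setup/common.sh source: read /proc/meminfo (or a per-node meminfo file), strip the "Node N " prefix that per-node files carry, then scan key by key until the requested field matches.

#!/usr/bin/env bash
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val line
    local mem_f mem
    mem_f=/proc/meminfo
    # With an empty $node this tests the bogus path .../node/node/meminfo and
    # falls through to /proc/meminfo, the same quirk visible in the trace.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop the per-node "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # skip every non-matching key
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp    # prints 0 on the system traced above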
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.868 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175573144 kB' 'MemAvailable: 178446412 kB' 'Buffers: 3896 kB' 'Cached: 10160884 kB' 'SwapCached: 0 kB' 'Active: 7190180 kB' 'Inactive: 3507524 kB' 'Active(anon): 6798172 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536120 kB' 'Mapped: 202812 kB' 'Shmem: 6265248 kB' 'KReclaimable: 236372 kB' 'Slab: 822636 kB' 'SReclaimable: 236372 kB' 'SUnreclaim: 586264 kB' 'KernelStack: 20640 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8326920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
[trace condensed: the same setup/common.sh@31-@32 field loop continues past every non-matching key until HugePages_Rsvd matches]
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:20.869 nr_hugepages=1024
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:20.869 resv_hugepages=0
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:20.869 surplus_hugepages=0
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:20.869 anon_hugepages=0
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
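At this point the case has collected anon=0, surp=0 and resv=0, and the two arithmetic guards above assert the accounting invariant the test relies on: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages (1024 == 1024 + 0 + 0 here). A minimal standalone sketch of that check follows; variable names follow the trace, and awk stands in for the script's get_meminfo helper, so treat it as an illustration rather than the script's own code.

#!/usr/bin/env bash
# Invariant asserted by the trace: total pages == requested + surplus + reserved.
nr_hugepages=1024 surp=0 resv=0

total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "hugepage accounting mismatch: $total != $((nr_hugepages + surp + resv))" >&2
fi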
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175572316 kB' 'MemAvailable: 178445584 kB' 'Buffers: 3896 kB' 'Cached: 10160908 kB' 'SwapCached: 0 kB' 'Active: 7190640 kB' 'Inactive: 3507524 kB' 'Active(anon): 6798632 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536608 kB' 'Mapped: 202828 kB' 'Shmem: 6265272 kB' 'KReclaimable: 236372 kB' 'Slab: 822636 kB' 'SReclaimable: 236372 kB' 'SUnreclaim: 586264 kB' 'KernelStack: 20864 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8326944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.869 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 
21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical read/compare/continue xtrace elided for the remaining /proc/meminfo keys (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted); none matches HugePages_Total ...]
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.870 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
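What the trace above boils down to is setup/common.sh's get_meminfo: read the (possibly per-node) meminfo file one "key: value" pair at a time with IFS=': ' read -r var val _, skip every key that is not the one requested, and echo the value on the first match. The backslash-escaped right-hand sides in the trace ([[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]) are only xtrace's rendering of a quoted, literal pattern. A minimal sketch of the same idea before the node0 snapshot below (a simplified reconstruction, not the shipped helper, which mapfiles the file into an array first):

    get_meminfo() {   # usage: get_meminfo KEY [NODE] -- hypothetical reconstruction
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # a per-node query reads that node's own meminfo file when it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            # quoting the RHS forces a literal match; xtrace prints it escaped
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix each line with "Node N "
        return 1
    }
    # e.g. get_meminfo HugePages_Total  -> 1024 on this box
    #      get_meminfo HugePages_Surp 0 -> 0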
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87198856 kB' 'MemUsed: 10463828 kB' 'SwapCached: 0 kB' 'Active: 4969356 kB' 'Inactive: 3335448 kB' 'Active(anon): 4811816 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8148708 kB' 'Mapped: 73784 kB' 'AnonPages: 159296 kB' 'Shmem: 4655720 kB' 'KernelStack: 11720 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127100 kB' 'Slab: 400312 kB' 'SReclaimable: 127100 kB' 'SUnreclaim: 273212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... read/compare/continue xtrace elided for the node0 keys MemTotal through HugePages_Free; none matches HugePages_Surp ...]
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
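The node1 pass that follows repeats the same scan; the one per-node wrinkle is common.sh@29 above. /sys/devices/system/node/nodeN/meminfo prefixes every line with "Node N ", and the extglob pattern strips that prefix so the array parses exactly like /proc/meminfo. A standalone demo, assuming a two-node box like this one:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node1/meminfo
    # each element reads "Node 1 MemTotal: ..."; drop the "Node 1 " prefix
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"   # first three entries, sans prefix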
00:03:20.871 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88371872 kB' 'MemUsed: 5346596 kB' 'SwapCached: 0 kB' 'Active: 2221336 kB' 'Inactive: 172076 kB' 'Active(anon): 1986868 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2016116 kB' 'Mapped: 129044 kB' 'AnonPages: 377348 kB' 'Shmem: 1609572 kB' 'KernelStack: 8952 kB' 'PageTables: 5412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109272 kB' 'Slab: 422324 kB' 'SReclaimable: 109272 kB' 'SUnreclaim: 313052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... read/compare/continue xtrace elided for the node1 keys MemTotal through HugePages_Free; none matches HugePages_Surp ...]
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:20.872 node0=512 expecting 512
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:20.872 node1=512 expecting 512
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:20.872 
00:03:20.872 real	0m2.831s
00:03:20.872 user	0m1.127s
00:03:20.872 sys	0m1.697s
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:20.872 21:41:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:20.872 ************************************
00:03:20.872 END TEST even_2G_alloc
00:03:20.872 ************************************
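For the record, the arithmetic even_2G_alloc just passed reduces to two checks: the global pool must satisfy HugePages_Total == nr_hugepages + surp + resv (1024 == 1024 + 0 + 0 here), and each node's test quota plus reserved plus that node's surplus must equal the even share. A hedged re-derivation using the get_meminfo sketch above (values taken from this trace, not a general proof):

    nr_hugepages=1024 surp=0 resv=0    # values observed in the trace
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || echo 'global count mismatch'
    for node in 0 1; do
        got=$(( 512 + resv + $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=$got expecting 512"
    done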
00:03:20.872 21:41:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:20.872 21:41:15 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:20.872 21:41:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:20.872 21:41:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:20.872 21:41:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.872 ************************************
00:03:20.872 START TEST odd_alloc
00:03:20.872 ************************************
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.872 21:41:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:24.167 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:24.167 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
[... 15 similar "(8086 2021): Already using the vfio-pci driver" lines elided for 0000:00:04.0-04.6 and 0000:80:04.0-04.7 ...]
00:03:24.167 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
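Before the /proc/meminfo snapshot below, the part of the odd_alloc prologue worth spelling out is hugepages.sh@81-84: 2098176 kB works out to 1025 two-megabyte pages on this box (trace: nr_hugepages=1025), which cannot split evenly across two nodes, so the loop floors the division per remaining node and lets the earlier-indexed node absorb the remainder. Reconstructed from the ": 513" / ": 1" arithmetic in the trace (the assignment targets are inferred, not read from the script):

    _nr_hugepages=1025 _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 512, then 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # 513 left, then 0
        : $(( --_no_nodes ))                                         # 1 node left, then 0
    done
    declare -p nodes_test   # declare -a nodes_test=([0]="513" [1]="512")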
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542636 kB' 'Mapped: 203360 kB' 'Shmem: 6265372 kB' 'KReclaimable: 236372 kB' 'Slab: 822668 kB' 'SReclaimable: 236372 kB' 'SUnreclaim: 586296 kB' 'KernelStack: 20896 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8333172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315760 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.168 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 
21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.169 
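The trace above is the tail of a get_meminfo AnonHugePages call: setup/common.sh@31-32 walk every key in the meminfo snapshot, skipping each with continue until the requested key matches. The backslash-escaped pattern (\A\n\o\n\H\u\g\e\P\a\g\e\s) is simply how bash xtrace prints a quoted comparison word, so each key is matched literally rather than as a glob. Below is a minimal sketch of that helper, reconstructed from the traced line numbers rather than copied from SPDK's setup/common.sh, so treat the exact body as an assumption:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    # Reconstructed sketch of the get_meminfo helper traced above
    # (setup/common.sh@17-33); details inferred from the log, not copied.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo
        # @23: with a node argument the per-node meminfo is preferred; with an
        # empty $node the path becomes .../node/node/meminfo and the test fails,
        # so the system-wide /proc/meminfo is used (as in this run).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local mem
        mapfile -t mem < "$mem_f"                  # @28: snapshot the file
        mem=("${mem[@]#Node +([0-9]) }")           # @29: drop "Node N " prefixes
        # @31-33: split each "Key: value kB" line on ': ' and skip keys
        # until $get matches, exactly the continue-loop seen in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")        # @16: replay the snapshot
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 in the run above (value is in kB)

The same pattern repeats in the next call, where the empty node argument is visible in the trace as the failing -e test on /sys/devices/system/node/node/meminfo before the helper falls back to /proc/meminfo.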
21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175545648 kB' 'MemAvailable: 178418916 kB' 'Buffers: 3896 kB' 'Cached: 10161012 kB' 'SwapCached: 0 kB' 'Active: 7198112 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806104 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543912 kB' 'Mapped: 203580 kB' 'Shmem: 6265376 kB' 'KReclaimable: 236372 kB' 'Slab: 822668 kB' 'SReclaimable: 236372 kB' 'SUnreclaim: 586296 kB' 'KernelStack: 20896 kB' 'PageTables: 9488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8331932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315696 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.169 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 
21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.170 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.171 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175550440 kB' 'MemAvailable: 178423708 kB' 'Buffers: 3896 kB' 'Cached: 10161028 kB' 'SwapCached: 0 kB' 'Active: 7190536 kB' 'Inactive: 3507524 kB' 'Active(anon): 6798528 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536244 kB' 'Mapped: 202844 kB' 'Shmem: 6265392 kB' 'KReclaimable: 236372 kB' 'Slab: 822612 kB' 'SReclaimable: 236372 kB' 'SUnreclaim: 586240 kB' 'KernelStack: 20432 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8324708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.172 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:24.173 nr_hugepages=1025 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.173 resv_hugepages=0 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.173 surplus_hugepages=0 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.173 anon_hugepages=0 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 
1025 == nr_hugepages )) 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.173 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175550444 kB' 'MemAvailable: 178423712 kB' 'Buffers: 3896 kB' 'Cached: 10161048 kB' 'SwapCached: 0 kB' 'Active: 7190568 kB' 'Inactive: 3507524 kB' 'Active(anon): 6798560 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536332 kB' 'Mapped: 202820 kB' 'Shmem: 6265412 kB' 'KReclaimable: 236372 kB' 'Slab: 822612 kB' 'SReclaimable: 236372 kB' 'SUnreclaim: 586240 kB' 'KernelStack: 20544 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8324728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc 
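The meminfo snapshot printed just above (common.sh@16) can also be sanity-checked directly; a quick hedged cross-check using only figures that appear in the log:

    # 1025 pages of 2048 kB each should account for the reported Hugetlb total.
    echo $(( 1025 * 2048 ))   # 2099200, matching 'Hugetlb: 2099200 kB'

HugePages_Free equals HugePages_Total here, so none of the odd-sized pool is in use yet while the scan for HugePages_Total continues below.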
-- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 
21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.174 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87189072 kB' 'MemUsed: 10473612 kB' 'SwapCached: 0 kB' 'Active: 4969460 kB' 'Inactive: 3335448 kB' 'Active(anon): 4811920 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 
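The get_meminfo helper being exercised above is easier to follow as a reconstruction. The sketch below is inferred from the xtrace (setup/common.sh@17-33 as traced); it is a minimal approximation, not the verbatim SPDK helper, and the error handling for a missing node file is an assumption:

    #!/usr/bin/env bash
    # Sketch of get_meminfo as inferred from the xtrace above; the real
    # setup/common.sh may differ in detail.
    shopt -s extglob   # required for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1    # meminfo key to look up, e.g. HugePages_Total
        local node=$2   # optional NUMA node; empty means system-wide
        local var val
        local mem_f=/proc/meminfo mem
        # A per-node lookup reads that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Split "Key: value [kB]" on colon/space; keep key and value.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"   # a kB figure, or a bare count for HugePages_*
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total     # system-wide; prints 1025 in this run
    get_meminfo HugePages_Surp 0    # node 0 only; prints 0 in this run

The non-matching keys are skipped with continue, which is exactly the repeated trace triplet collapsed above.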
00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.175 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87189072 kB' 'MemUsed: 10473612 kB' 'SwapCached: 0 kB' 'Active: 4969460 kB' 'Inactive: 3335448 kB' 'Active(anon): 4811920 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8148848 kB' 'Mapped: 73760 kB' 'AnonPages: 159224 kB' 'Shmem: 4655860 kB' 'KernelStack: 11688 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127100 kB' 'Slab: 400280 kB' 'SReclaimable: 127100 kB' 'SUnreclaim: 273180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the same setup/common.sh@31-32 read loop skipped every non-matching node0 meminfo key from MemTotal through HugePages_Free]
00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
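The setup/hugepages.sh@115-117 fragment driving these two lookups amounts to the following per-node accounting loop. This is a reconstruction from the trace, not the verbatim SPDK source; nodes_test and resv are assumed defined as the trace shows, and get_meminfo is the helper sketched earlier:

    # Per-node surplus accounting as seen at setup/hugepages.sh@115-117.
    # In this run both resv and the reported surplus are 0, so the
    # expected counts (512 on node0, 513 on node1) are left unchanged.
    for node in "${!nodes_test[@]}"; do
        ((nodes_test[node] += resv))
        ((nodes_test[node] += $(get_meminfo HugePages_Surp "$node")))
    done

The second iteration, for node 1, follows immediately below.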
00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.176 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.177 21:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88361444 kB' 'MemUsed: 5357024 kB' 'SwapCached: 0 kB' 'Active: 2220752 kB' 'Inactive: 172076 kB' 'Active(anon): 1986284 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2016120 kB' 'Mapped: 129060 kB' 'AnonPages: 376756 kB' 'Shmem: 1609576 kB' 'KernelStack: 8840 kB' 'PageTables: 4992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109272 kB' 'Slab: 422332 kB' 'SReclaimable: 109272 kB' 'SUnreclaim: 313060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace elided: the same setup/common.sh@31-32 read loop skipped every non-matching node1 meminfo key from MemTotal through HugePages_Free]
00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:24.178 node0=512 expecting 513 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:24.178 node1=513 expecting 512 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:24.178 00:03:24.178 real 0m2.966s 00:03:24.178 user 0m1.213s 00:03:24.178 sys 0m1.818s 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.178 21:41:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:24.178 ************************************ 00:03:24.178 END TEST odd_alloc 00:03:24.178 ************************************ 00:03:24.178 21:41:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:24.178 21:41:18 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:24.178 21:41:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.178 21:41:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.178 21:41:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.178 ************************************ 00:03:24.178 START TEST custom_alloc 00:03:24.178 ************************************ 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size 
>= default_hugepages )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.178 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:24.179 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:24.179 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:24.179 21:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:24.179 21:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.179 21:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.083 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.083 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.083 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.083 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.083 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.083 0000:00:04.3 (8086 
00:03:26.083 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:26.083 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.083 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.348 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174517524 kB' 'MemAvailable: 177390776 kB' 'Buffers: 3896 kB' 'Cached: 10161168 kB' 'SwapCached: 0 kB' 'Active: 7191592 kB' 'Inactive: 3507524 kB' 'Active(anon): 6799584 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537424 kB' 'Mapped: 202748 kB' 'Shmem: 6265532 kB' 'KReclaimable: 236340 kB' 'Slab: 822228 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585888 kB' 'KernelStack: 20880 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8328136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
[setup/common.sh@31-32 xtrace elided: the read loop walks every key of the snapshot above in order, comparing each against AnonHugePages and hitting continue on every non-match]
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
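The anon=0 result above is produced by get_meminfo scanning the captured snapshot key by key until the requested field matches. A condensed sketch of that parser as implied by the common.sh@17-33 records (a reconstruction under those assumptions, not the verbatim SPDK helper):

    # Hypothetical reconstruction of setup/common.sh's get_meminfo.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # A per-node query would read the node-local file instead; with node
        # empty, as in this run, the @23/@25 checks fall through to /proc/meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes (@29)
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long continue run elided above
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo AnonHugePages against the snapshot above, it prints 0, which hugepages.sh@97 captures into anon.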
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.350 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174516672 kB' 'MemAvailable: 177389924 kB' 'Buffers: 3896 kB' 'Cached: 10161172 kB' 'SwapCached: 0 kB' 'Active: 7192744 kB' 'Inactive: 3507524 kB' 'Active(anon): 6800736 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538532 kB' 'Mapped: 202816 kB' 'Shmem: 6265536 kB' 'KReclaimable: 236340 kB' 'Slab: 822156 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585816 kB' 'KernelStack: 20864 kB' 'PageTables: 9632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8328156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315708 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
[setup/common.sh@31-32 xtrace elided: the same key-by-key scan as above, this time comparing each field against HugePages_Surp]
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
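At this point two of the three bookkeeping values are in hand (anon=0, surp=0) and the reserved count is fetched next. In outline, the verification the script is building toward looks roughly like this; the exact comparison is an assumption based on the values gathered here, since the captured log ends before the check itself appears:

    # Hypothetical outline of where verify_nr_hugepages is heading.
    nr_hugepages=1536                       # 512 (node0) + 1024 (node1)
    anon=$(get_meminfo AnonHugePages)       # 0, per hugepages.sh@97 above
    surp=$(get_meminfo HugePages_Surp)      # 0, per hugepages.sh@99 above
    resv=$(get_meminfo HugePages_Rsvd)      # being fetched at @100 below
    total=$(get_meminfo HugePages_Total)    # 1536 in every snapshot above
    ((total == nr_hugepages)) || echo "hugepage count mismatch: $total != $nr_hugepages" >&2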
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.351 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174517592 kB' 'MemAvailable: 177390844 kB' 'Buffers: 3896 kB' 'Cached: 10161188 kB' 'SwapCached: 0 kB' 'Active: 7192384 kB' 'Inactive: 3507524 kB' 'Active(anon): 6800376 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538136 kB' 'Mapped: 202748 kB' 'Shmem: 6265552 kB' 'KReclaimable: 236340 kB' 'Slab: 822156 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585816 kB' 'KernelStack: 20704 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8328176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
[setup/common.sh@31-32 xtrace elided: the same key-by-key scan now runs for HugePages_Rsvd; the captured log breaks off partway through this scan]
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:26.353 nr_hugepages=1536 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.353 resv_hugepages=0 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.353 surplus_hugepages=0 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.353 anon_hugepages=0 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174516768 kB' 'MemAvailable: 177390020 kB' 'Buffers: 3896 kB' 'Cached: 10161212 kB' 'SwapCached: 0 kB' 'Active: 7191916 kB' 'Inactive: 3507524 kB' 'Active(anon): 6799908 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537656 kB' 'Mapped: 202740 kB' 'Shmem: 
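[Note: the xtrace above is the test framework's get_meminfo helper walking /proc/meminfo one "Key: value" line at a time until the requested key (here HugePages_Rsvd) matches. Below is a minimal standalone sketch of that parsing pattern, assuming plain bash; the function name and interface are illustrative, not SPDK's setup/common.sh itself.]

  #!/usr/bin/env bash
  # Illustrative sketch only (not the SPDK helper): scan /proc/meminfo,
  # splitting each line on ':' and whitespace, and print the value of one key.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # e.g. "HugePages_Rsvd:    0" -> var=HugePages_Rsvd, val=0
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  meminfo_value HugePages_Rsvd   # this run reports 0, matching resv=0 above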
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.353 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174516768 kB' 'MemAvailable: 177390020 kB' 'Buffers: 3896 kB' 'Cached: 10161212 kB' 'SwapCached: 0 kB' 'Active: 7191916 kB' 'Inactive: 3507524 kB' 'Active(anon): 6799908 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537656 kB' 'Mapped: 202740 kB' 'Shmem: 6265576 kB' 'KReclaimable: 236340 kB' 'Slab: 822156 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585816 kB' 'KernelStack: 20768 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8327952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
00:03:26.354 [... identical setup/common.sh@31/@32 xtrace omitted: the loop tests each meminfo key against HugePages_Total and continues past every non-match ...]
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
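[Note: the @22-@25 lines above show how the helper picks its data source: it defaults to /proc/meminfo, and only when a node argument is supplied (as happens next for node0 and node1) does it switch to the per-node sysfs file and strip the "Node N " prefix those lines carry. Below is a sketch of that selection under the same assumptions; variable names mirror the trace but the script itself is illustrative.]

  #!/usr/bin/env bash
  # Illustrative sketch of the meminfo source selection seen at @22-@29.
  shopt -s extglob                           # for the +([0-9]) pattern below
  node=$1 mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")           # per-node lines read "Node 0 MemTotal: ..."
  printf '%s\n' "${mem[@]}"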
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87186108 kB' 'MemUsed: 10476576 kB' 'SwapCached: 0 kB' 'Active: 4970444 kB' 'Inactive: 3335448 kB' 'Active(anon): 4812904 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8148928 kB' 'Mapped: 73668 kB' 'AnonPages: 160116 kB' 'Shmem: 4655940 kB' 'KernelStack: 12088 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127100 kB' 'Slab: 400088 kB' 'SReclaimable: 127100 kB' 'SUnreclaim: 272988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.355 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.356 [... identical setup/common.sh@31/@32 xtrace omitted: the loop tests each node0 meminfo key against HugePages_Surp and continues past every non-match ...]
00:03:26.356 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.356 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.356 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.356 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
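[Note: get_nodes above recorded the split this custom_alloc case verifies: 512 pages on node0 and 1024 on node1, 1536 in total, with zero surplus per node. The ${node##*node} expansion in the trace derives the numeric node id from the sysfs path. Below is a sketch of enumerating nodes and reading, or as root requesting, such a split through the standard Linux hugetlb sysfs files; the paths are kernel-standard for 2048 kB pages, the array name mirrors the trace, and the script itself is illustrative.]

  #!/usr/bin/env bash
  # Illustrative sketch: enumerate NUMA nodes like get_nodes and read each
  # node's 2 MB hugepage count from sysfs.
  shopt -s extglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} strips everything through the last "node": node0 -> 0
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  for id in "${!nodes_sys[@]}"; do
      printf 'node%s: %s pages\n' "$id" "${nodes_sys[id]}"
  done
  # Requesting the split this run verifies would look like (root required):
  #   echo 512  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  #   echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages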
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.357 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.618 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.618 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.618 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.618 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.618 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.618 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.618 21:41:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: the scan continues field by field over the per-node meminfo (Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), skipping every field that is not HugePages_Surp ...]
00:03:26.618 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
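For reference, the loop traced above is setup/common.sh's get_meminfo helper: it walks the chosen meminfo file one IFS=': ' read at a time and echoes the value of the first field whose name matches the request. A minimal sketch of the idea, as an illustrative reimplementation rather than the verbatim helper (the real one also strips the "Node N" prefix that per-node meminfo files carry):

    # Sketch: fetch one field from /proc/meminfo (or a node's meminfo file).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node statistics live under sysfs when a node id is given.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # Skip every line until the requested field name matches.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        echo 0
    }
    get_meminfo HugePages_Surp   # prints 0 on this box, matching the trace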
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:26.619 node0=512 expecting 512
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:26.619 node1=1024 expecting 1024
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:26.619 
00:03:26.619 real 0m2.521s
00:03:26.619 user 0m0.976s
00:03:26.619 sys 0m1.533s
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:26.619 21:41:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:26.619 ************************************
00:03:26.619 END TEST custom_alloc
00:03:26.619 ************************************
00:03:26.619 21:41:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:26.619 21:41:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:26.619 21:41:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.619 21:41:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.619 21:41:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.619 ************************************
00:03:26.619 START TEST no_shrink_alloc
00:03:26.619 ************************************
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
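The node0=512/node1=1024 lines above are the tail of custom_alloc's verification: the per-node counts are dropped into indexed arrays as the indices, so listing the indices back yields them sorted and de-duplicated for one string compare. A sketch of that trick, with the values hard-coded to mirror the trace (the arrays are normally filled from sysfs, not literals):

    # Sketch of the sorted_t/sorted_s comparison seen in the trace.
    nodes_test=([0]=512 [1]=1024)   # expected hugepages per NUMA node
    nodes_sys=([0]=512 [1]=1024)    # counts read back from the system (assumed here)
    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # index == count, so "${!sorted_t[@]}"
        sorted_s[nodes_sys[node]]=1    # comes back in ascending order
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    expected=$(IFS=,; echo "${!sorted_t[*]}")   # -> "512,1024"
    actual=$(IFS=,; echo "${!sorted_s[*]}")     # -> "512,1024"
    [[ $actual == "$expected" ]]                # the [[ 512,1024 == ... ]] check above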
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.619 21:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:29.161 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.161 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.161 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.162 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.162 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.162 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.162 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
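verify_nr_hugepages starts with the transparent-hugepage guard just traced: only when THP is not pinned to [never] can the kernel hand out anonymous huge pages that would skew the accounting, so AnonHugePages is read next (and comes back 0 below). A simplified sketch of that guard, using the standard sysfs path and the get_meminfo sketch from earlier:

    # Sketch: only account for THP if it can actually be in use.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP may allocate anonymous huge pages behind the test's back,
        # so AnonHugePages (kB) has to be folded into the bookkeeping.
        anon=$(get_meminfo AnonHugePages)
    fi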
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.162 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.163 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.163 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.163 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.163 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175536044 kB' 'MemAvailable: 178409296 kB' 'Buffers: 3896 kB' 'Cached: 10161308 kB' 'SwapCached: 0 kB' 'Active: 7191896 kB' 'Inactive: 3507524 kB' 'Active(anon): 6799888 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537352 kB' 'Mapped: 202884 kB' 'Shmem: 6265672 kB' 'KReclaimable: 236340 kB' 'Slab: 821980 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585640 kB' 'KernelStack: 20496 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8328696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
[... setup/common.sh@31-32: the scan compared every field from MemTotal through HardwareCorrupted against AnonHugePages and continued past each non-matching one ...]
00:03:29.168 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.168 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.168 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.168 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:29.168 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:29.168 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.168 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.169 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175534860 kB' 'MemAvailable: 178408112 kB' 'Buffers: 3896 kB' 'Cached: 10161308 kB' 'SwapCached: 0 kB' 'Active: 7193692 kB' 'Inactive: 3507524 kB' 'Active(anon): 6801684 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539120 kB' 'Mapped: 203384 kB' 'Shmem: 6265672 kB' 'KReclaimable: 236340 kB' 'Slab: 821980 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585640 kB' 'KernelStack: 20800 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8330072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315708 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
[... setup/common.sh@31-32: the HugePages_Surp lookup scanned every field from MemTotal through Unaccepted, continuing past each one that did not match ...]
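The scan below finally reaches the counters the test cares about. For reference, the four HugeTLB fields in the snapshots read (counted in pages, not kB): HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0. An illustrative one-liner over the same /proc/meminfo the trace is scanning:

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1, $2 }' /proc/meminfo
    # HugePages_Total:  size of the static pool        (1024 here)
    # HugePages_Free:   pages nobody has claimed yet   (1024 here)
    # HugePages_Rsvd:   reserved but not yet faulted   (0)
    # HugePages_Surp:   overcommit beyond the pool     (0)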
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:29.175 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
'KReclaimable: 236340 kB' 'Slab: 822000 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585660 kB' 'KernelStack: 20672 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8330580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315756 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
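Here the same walk repeats for HugePages_Rsvd. Once both scans finish (surp=0 and resv=0 are assigned further down, and HugePages_Total reads back 1024), hugepages.sh checks that the kernel's counters add up and then redoes the surplus lookup once per NUMA node. A hedged sketch of that arithmetic, reusing get_meminfo_sketch (and its extglob setting) from the sketch above; the mismatch message is an assumption for illustration:

    nr_hugepages=1024 surp=0 resv=0   # values the trace establishes below
    total=$(get_meminfo_sketch HugePages_Total)
    # HugePages_Total must equal the requested pages plus surplus plus
    # reserved -- the (( 1024 == nr_hugepages + surp + resv )) check below.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage count mismatch'
    # The surplus lookup then runs against each NUMA node's meminfo file
    # (two nodes on this box, hence no_nodes=2 in the trace; node0 later
    # reports "1024 expecting 1024").
    for node in /sys/devices/system/node/node+([0-9]); do
        echo "node${node##*node} HugePages_Surp: $(get_meminfo_sketch HugePages_Surp "${node##*node}")"
    done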
00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.176 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.177 nr_hugepages=1024 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.177 resv_hugepages=0 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.177 surplus_hugepages=0 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.177 anon_hugepages=0 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175527620 kB' 'MemAvailable: 178400872 kB' 'Buffers: 3896 kB' 'Cached: 10161316 kB' 'SwapCached: 0 kB' 'Active: 7198548 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806540 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543788 kB' 'Mapped: 203384 kB' 'Shmem: 6265680 kB' 'KReclaimable: 236340 kB' 'Slab: 822000 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585660 kB' 'KernelStack: 20560 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8334880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315744 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.177 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.178 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86139128 kB' 'MemUsed: 11523556 kB' 'SwapCached: 0 kB' 'Active: 4969120 kB' 'Inactive: 3335448 kB' 'Active(anon): 4811580 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8148932 kB' 'Mapped: 73852 kB' 'AnonPages: 158680 kB' 'Shmem: 4655944 kB' 'KernelStack: 11784 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127100 kB' 'Slab: 399668 kB' 'SReclaimable: 127100 kB' 'SUnreclaim: 272568 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 
21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.179 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- 
00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: the IFS=': ' read loop steps through every remaining /proc/meminfo key (Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) and continues until it reaches HugePages_Surp]
00:03:29.180 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.181 21:41:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:31.733 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 (8086 2021), 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver [17 per-device lines condensed]
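For context, the CLEAR_HUGE/NRHUGE knobs above drive setup.sh's hugepage reservation. A minimal sketch of that allocation pattern, assuming the standard kernel sysfs interface (the variable names mirror the trace, but this is illustrative, not SPDK's setup.sh itself):

#!/usr/bin/env bash
# Sketch: reserve 2 MiB hugepages on one NUMA node via sysfs (run as root).
NRHUGE=${NRHUGE:-512}
CLEAR_HUGE=${CLEAR_HUGE:-no}
node=${1:-0}
nr_path=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages

current=$(<"$nr_path")
if [[ $CLEAR_HUGE == yes ]]; then
  echo 0 > "$nr_path"          # drop the existing reservation first
  current=0
fi
if (( current < NRHUGE )); then
  echo "$NRHUGE" > "$nr_path"  # kernel may grant fewer pages than requested
fi
echo "node${node}: $(<"$nr_path") hugepages reserved"

With CLEAR_HUGE=no and 1024 pages already reserved, a request for 512 is satisfied without shrinking the pool, which is what the INFO line below reports (and what the no_shrink_alloc test name refers to).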
00:03:31.733 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:31.733 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [get_meminfo setup condensed: local get=AnonHugePages; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; the per-node file /sys/devices/system/node/node/meminfo does not exist (node is empty), so mapfile -t mem reads /proc/meminfo, mem=("${mem[@]#Node +([0-9]) }") strips the per-node prefix, and the IFS=': ' read loop begins]
00:03:31.734 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175518700 kB' 'MemAvailable: 178391952 kB' 'Buffers: 3896 kB' 'Cached: 10161432 kB' 'SwapCached: 0 kB' 'Active: 7192956 kB' 'Inactive: 3507524 kB' 'Active(anon): 6800948 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538092 kB' 'Mapped: 202880 kB' 'Shmem: 6265796 kB' 'KReclaimable: 236340 kB' 'Slab: 822016 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585676 kB' 'KernelStack: 20496 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8329204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315724 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
00:03:31.734 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: the read loop steps through every key from MemTotal through HardwareCorrupted and continues until it reaches AnonHugePages]
00:03:31.735 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.735 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.735 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:31.735 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
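The get_meminfo calls traced above all follow the same shape: slurp the meminfo file, strip the per-node prefix, then scan key by key. A minimal sketch of that pattern, mirroring the logic visible in the setup/common.sh trace (this is a re-sketch, not a verbatim copy of the script):

shopt -s extglob   # needed for the "Node <n> " prefix strip below

# get_meminfo KEY [NODE] -> prints KEY's value from /proc/meminfo, or from
# /sys/devices/system/node/nodeN/meminfo when a node number is given.
get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  local -a mem
  local var val _ line
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # Per-node files prefix every line with "Node <n> "; strip it so the
  # same "key: value" parse works for both sources.
  mem=("${mem[@]#Node +([0-9]) }")
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done
  return 1
}

Against the dump above, get_meminfo HugePages_Surp prints 0 and get_meminfo HugePages_Free prints 1024; get_meminfo HugePages_Total 0 would read node 0's file instead. The trace shows the node argument empty here, so the /sys/devices/system/node/node/meminfo existence check fails and the global /proc/meminfo is used.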
00:03:31.735 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [get_meminfo setup condensed: local get=HugePages_Surp; node is again empty, so the same mapfile/prefix-strip/read-loop sequence runs against /proc/meminfo]
00:03:31.735 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175514784 kB' 'MemAvailable: 178388036 kB' 'Buffers: 3896 kB' 'Cached: 10161432 kB' 'SwapCached: 0 kB' 'Active: 7194904 kB' 'Inactive: 3507524 kB' 'Active(anon): 6802896 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540080 kB' 'Mapped: 202896 kB' 'Shmem: 6265796 kB' 'KReclaimable: 236340 kB' 'Slab: 822048 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585708 kB' 'KernelStack: 20720 kB' 'PageTables: 9444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8341188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315740 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
00:03:31.735 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: the read loop steps through every key from MemTotal through HugePages_Rsvd and continues until it reaches HugePages_Surp]
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
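Putting the three lookups together: verify_nr_hugepages records anon THP usage, then surplus, then reserved counts before comparing per-node totals. A condensed sketch of that flow, assuming the get_meminfo helper sketched earlier (illustrative, not the literal setup/hugepages.sh):

# Sketch of the verify_nr_hugepages flow traced here: the static pool is
# only trustworthy if THP is not interfering and no pages are
# surplus-allocated or reserved mid-test.
verify_nr_hugepages() {
  local anon=0 surp resv
  # Matches the trace's check: enabled reads e.g. "always [madvise] never".
  if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]]; then
    anon=$(get_meminfo AnonHugePages)
  fi
  surp=$(get_meminfo HugePages_Surp)
  resv=$(get_meminfo HugePages_Rsvd)
  echo "anon=$anon surp=$surp resv=$resv"
  (( surp == 0 && resv == 0 ))   # fail if the pool is in flux
}

In this run the THP setting is "always [madvise] never", so the condition is true and AnonHugePages is read anyway, coming back 0 along with surp=0, exactly as the trace shows.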
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20-31 -- # [get_meminfo setup condensed: local mem_f mem; mem_f=/proc/meminfo; no per-node file, so mapfile -t mem, the prefix strip, and the IFS=': ' read loop run as before]
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175514608 kB' 'MemAvailable: 178387860 kB' 'Buffers: 3896 kB' 'Cached: 10161452 kB' 'SwapCached: 0 kB' 'Active: 7193552 kB' 'Inactive: 3507524 kB' 'Active(anon): 6801544 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538840 kB' 'Mapped: 202880 kB' 'Shmem: 6265816 kB' 'KReclaimable: 236340 kB' 'Slab: 822104 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585764 kB' 'KernelStack: 20656 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8328880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315724 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB'
00:03:31.737 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: the read loop scans from MemTotal onward, continuing toward HugePages_Rsvd]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
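The long runs of comparisons above are ordinary bash xtrace output: setup/common.sh@32 tests each meminfo key against a quoted pattern inside [[ ]], and when the right-hand side of == is quoted, set -x prints it with every character backslash-escaped to signal a literal (non-glob) match. A minimal demo of that rendering (the variable names are mine, not the suite's):

#!/usr/bin/env bash
# Demo of the xtrace rendering seen above: quoting the right-hand
# side of == inside [[ ]] forces a literal (non-glob) match, which
# set -x prints with every character backslash-escaped.
set -x
key=Cached
want=HugePages_Rsvd
[[ $key == "$want" ]] || echo "no match, keep scanning"
# stderr shows: [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]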
00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.738 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.739 nr_hugepages=1024 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.739 resv_hugepages=0 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.739 surplus_hugepages=0 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.739 anon_hugepages=0 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.739 21:41:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175513100 kB' 'MemAvailable: 178386352 kB' 'Buffers: 3896 kB' 'Cached: 10161472 kB' 'SwapCached: 0 kB' 'Active: 7192956 kB' 'Inactive: 3507524 kB' 'Active(anon): 6800948 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538236 kB' 'Mapped: 202888 kB' 'Shmem: 6265836 kB' 'KReclaimable: 236340 kB' 'Slab: 822104 kB' 'SReclaimable: 236340 kB' 'SUnreclaim: 585764 kB' 'KernelStack: 20704 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8328904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315756 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3029972 kB' 'DirectMap2M: 16572416 kB' 'DirectMap1G: 182452224 kB' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
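The block above is the suite's get_meminfo helper at work: setup/common.sh@28 snapshots the whole meminfo file into an array with mapfile -t mem, @16 replays it with printf '%s\n', and @31-@32 split each "Key: value kB" line with IFS=': ' read -r var val _, continuing past every key until the requested one matches and @33 echoes its value (1024 for HugePages_Total, a few lines below). A minimal sketch of that pattern — the name get_meminfo_sketch is mine, and the real helper also supports the per-node files shown later in this excerpt:

#!/usr/bin/env bash
# Sketch of the scan pattern traced above: snapshot meminfo once,
# then split each "Key: value kB" line on ': ' and return the value
# for the requested key.
get_meminfo_sketch() {
    local get=$1 line var val _
    local -a mem
    mapfile -t mem < /proc/meminfo       # one snapshot, so values are consistent
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue # xtrace renders this as \H\u\g\e...
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total      # prints 1024 on the node in this log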
00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.739 21:41:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.739 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.740 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.741 21:41:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86123672 kB' 'MemUsed: 11539012 kB' 'SwapCached: 0 kB' 'Active: 4969544 kB' 'Inactive: 3335448 kB' 'Active(anon): 4812004 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8148932 kB' 'Mapped: 73788 kB' 'AnonPages: 159168 kB' 'Shmem: 4655944 kB' 'KernelStack: 11880 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127100 kB' 'Slab: 399728 kB' 'SReclaimable: 127100 kB' 'SUnreclaim: 272628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
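At this point the helper re-runs with node=0: setup/common.sh@23-@24 swap mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and @29 strips the "Node 0 " prefix those lines carry with the extglob pattern ${mem[@]#Node +([0-9]) } — the same sysfs tree get_nodes walks at hugepages.sh@29 to count NUMA nodes. A sketch of that path selection, assuming a NUMA machine (the helper name is mine):

#!/usr/bin/env bash
# Sketch of the per-node branch of get_meminfo: prefer the node-local
# meminfo file when a node is named, and strip the "Node <n> " prefix
# its lines carry so the same parser works on both sources.
shopt -s extglob                        # for the +([0-9]) pattern below
node_meminfo_sketch() {
    local node=${1:-} mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 MemFree: ..." -> "MemFree: ..."
    printf '%s\n' "${mem[@]}"
}

node_meminfo_sketch 0    # node-local view, e.g. "HugePages_Free: 1024" above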
00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.741 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
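The node-0 scan above is still walking toward HugePages_Surp (it echoes 0 just below); that value feeds the accounting the suite asserts at hugepages.sh@107 and @110, and per node at @116-@117: the pool is consistent when HugePages_Total equals the requested nr_hugepages plus surplus plus reserved pages. A sketch of that check with the numbers this log reports:

#!/usr/bin/env bash
# Sketch of the invariant asserted at hugepages.sh@107/@110, with the
# values this log reports: 1024 total pages, 0 reserved, 0 surplus.
nr_hugepages=1024   # requested page count
resv=0              # HugePages_Rsvd
surp=0              # HugePages_Surp
total=1024          # HugePages_Total

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total, expected=$((nr_hugepages + surp + resv))" >&2
    exit 1
fi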
00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:31.742 node0=1024 expecting 1024 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:31.742 00:03:31.742 real 0m5.293s 00:03:31.742 user 0m2.030s 00:03:31.742 sys 0m3.282s 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.742 21:41:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:31.742 ************************************ 00:03:31.742 END TEST no_shrink_alloc 00:03:31.742 ************************************ 00:03:32.002 21:41:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:32.002 21:41:25 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:32.002 21:41:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:32.002 21:41:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.002 21:41:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.002 21:41:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.002 21:41:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.002 21:41:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.002 21:41:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.002 21:41:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.002 21:41:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.002 21:41:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.002 21:41:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.002 21:41:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.002 21:41:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.002 00:03:32.002 real 0m20.536s 00:03:32.002 user 0m7.740s 00:03:32.002 sys 0m12.089s 00:03:32.002 21:41:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.002 21:41:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.002 ************************************ 00:03:32.002 END TEST hugepages 00:03:32.002 ************************************ 00:03:32.002 21:41:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:32.002 21:41:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.002 21:41:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.002 21:41:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.002 21:41:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:32.002 ************************************ 00:03:32.002 START TEST driver 00:03:32.002 ************************************ 00:03:32.002 21:41:26 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.002 * Looking for test storage... 
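The guess_driver test starting above boils down to the probe traced below; a minimal bash sketch reconstructed from that xtrace (the helper name pick_vfio and the grep filter are illustrative, not the literal setup/driver.sh source):

# vfio-pci is only picked when IOMMU groups exist and the module's
# dependency chain resolves to real .ko files (see the
# modprobe --show-depends output in the trace below).
pick_vfio() {
    local groups=(/sys/kernel/iommu_groups/*)   # trace shows (( 174 > 0 ))
    if (( ${#groups[@]} > 0 )) &&
       modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}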
00:03:32.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:32.002 21:41:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:32.002 21:41:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.002 21:41:26 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.199 21:41:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:36.199 21:41:29 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.199 21:41:29 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.199 21:41:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:36.199 ************************************ 00:03:36.199 START TEST guess_driver 00:03:36.199 ************************************ 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:36.199 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.199 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.199 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.199 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.199 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:36.199 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:36.199 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:36.199 21:41:29 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:36.199 Looking for driver=vfio-pci 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.199 21:41:29 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.733 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.734 21:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.299 21:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.299 21:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.299 21:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.557 21:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:39.557 21:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:39.557 21:41:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.557 21:41:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.745 00:03:43.745 real 0m7.437s 00:03:43.745 user 0m2.122s 00:03:43.745 sys 0m3.779s 00:03:43.745 21:41:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.745 21:41:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:43.745 ************************************ 00:03:43.745 END TEST guess_driver 00:03:43.745 ************************************ 00:03:43.745 21:41:37 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:43.745 00:03:43.745 real 0m11.332s 00:03:43.745 user 0m3.253s 00:03:43.745 sys 0m5.740s 00:03:43.745 21:41:37 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.745 21:41:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:43.745 ************************************ 00:03:43.745 END TEST driver 00:03:43.745 ************************************ 00:03:43.745 21:41:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:43.745 21:41:37 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:43.745 21:41:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.745 21:41:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.745 21:41:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.745 ************************************ 00:03:43.745 START TEST devices 00:03:43.745 ************************************ 00:03:43.745 21:41:37 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:43.745 * Looking for test storage... 00:03:43.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.745 21:41:37 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:43.745 21:41:37 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:43.745 21:41:37 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.745 21:41:37 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:46.281 21:41:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:46.281 
21:41:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:46.281 No valid GPT data, bailing 00:03:46.281 21:41:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:46.281 21:41:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:46.281 21:41:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:46.281 21:41:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:46.281 21:41:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:46.281 21:41:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:46.281 21:41:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.281 21:41:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.281 ************************************ 00:03:46.281 START TEST nvme_mount 00:03:46.281 ************************************ 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:46.281 21:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:47.219 Creating new GPT entries in memory. 00:03:47.219 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:47.219 other utilities. 00:03:47.219 21:41:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:47.219 21:41:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.219 21:41:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:47.219 21:41:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.219 21:41:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:48.157 Creating new GPT entries in memory. 00:03:48.157 The operation has completed successfully. 00:03:48.157 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.157 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.157 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3495398 00:03:48.157 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.157 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:48.157 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.157 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:48.157 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:48.417 21:41:42 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.417 21:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.954 21:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:50.954 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.954 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:51.214 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:51.214 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:51.214 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:51.214 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.214 21:41:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:53.778 21:41:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:53.778 21:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.778 21:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.778 21:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.779 21:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.779 21:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:53.779 21:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.779 21:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.779 21:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
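The verify pass running here amounts to the loop below; a sketch assembled from the surrounding xtrace (setup is the test wrapper around scripts/setup.sh; the status match is simplified):

# Walk the setup.sh config output; only the allowed controller matters,
# and it must report the expected active mount for found to flip to 1.
found=0
PCI_ALLOWED=0000:5e:00.0
while read -r pci _ _ status; do
    [[ $pci == "$PCI_ALLOWED" ]] || continue
    [[ $status == *'Active devices: '*'data@nvme0n1'* ]] && found=1
done < <(setup output config)
(( found == 1 ))   # the test asserts the allowed device was seen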
00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.315 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.315 00:03:56.315 real 0m10.221s 00:03:56.315 user 0m2.859s 00:03:56.315 sys 0m5.088s 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.315 21:41:50 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:56.315 ************************************ 00:03:56.315 END TEST nvme_mount 00:03:56.315 ************************************ 00:03:56.315 21:41:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:56.315 21:41:50 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:56.315 21:41:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.315 21:41:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.315 21:41:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:56.574 ************************************ 00:03:56.574 START TEST dm_mount 00:03:56.574 ************************************ 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.574 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.575 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.575 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.575 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:56.575 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:56.575 21:41:50 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:57.513 Creating new GPT entries in memory. 00:03:57.513 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.513 other utilities. 00:03:57.513 21:41:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.513 21:41:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.513 21:41:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:57.513 21:41:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.513 21:41:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:58.451 Creating new GPT entries in memory. 00:03:58.451 The operation has completed successfully. 00:03:58.451 21:41:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.451 21:41:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.451 21:41:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.451 21:41:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.451 21:41:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:59.831 The operation has completed successfully. 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3499574 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.831 21:41:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:02.366 21:41:56 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.366 21:41:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.906 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.907 21:41:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:04.907 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:04.907 21:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:05.166 00:04:05.166 real 0m8.568s 00:04:05.166 user 0m2.058s 00:04:05.166 sys 0m3.497s 00:04:05.166 21:41:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.166 21:41:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:05.166 ************************************ 00:04:05.166 END TEST dm_mount 00:04:05.166 ************************************ 00:04:05.166 21:41:59 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:05.166 21:41:59 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:05.166 21:41:59 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:05.166 21:41:59 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.166 21:41:59 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.166 21:41:59 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:05.166 21:41:59 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.166 21:41:59 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:05.428 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:05.428 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:05.428 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:05.428 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:05.428 21:41:59 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:05.428 21:41:59 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:05.428 21:41:59 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:05.428 21:41:59 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.428 21:41:59 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:05.428 21:41:59 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.428 21:41:59 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:05.428 00:04:05.428 real 0m22.000s 00:04:05.428 user 0m6.006s 00:04:05.428 sys 0m10.513s 00:04:05.428 21:41:59 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.428 21:41:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:05.428 ************************************ 00:04:05.428 END TEST devices 00:04:05.428 ************************************ 00:04:05.428 21:41:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:05.428 00:04:05.428 real 1m13.087s 00:04:05.428 user 0m23.343s 00:04:05.428 sys 0m39.758s 00:04:05.428 21:41:59 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.428 21:41:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.428 ************************************ 00:04:05.428 END TEST setup.sh 00:04:05.428 ************************************ 00:04:05.428 21:41:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:05.428 21:41:59 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:07.968 Hugepages 00:04:07.968 node hugesize free / total 00:04:07.968 node0 1048576kB 0 / 0 00:04:07.968 node0 2048kB 2048 / 2048 00:04:07.968 node1 1048576kB 0 / 0 00:04:07.968 node1 2048kB 0 / 0 00:04:07.968 00:04:07.968 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:07.968 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:07.968 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:07.968 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:07.968 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:07.968 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:07.968 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:07.968 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:07.968 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:07.968 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:07.968 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:07.968 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:07.968 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:07.968 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:07.968 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:07.968 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:07.968 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:07.968 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:07.968 21:42:02 -- spdk/autotest.sh@130 -- # uname -s 00:04:07.968 21:42:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:07.968 21:42:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:07.968 21:42:02 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.506 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.506 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.075 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.335 21:42:05 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:12.275 21:42:06 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:12.275 21:42:06 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:12.275 21:42:06 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:12.275 21:42:06 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:12.275 21:42:06 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:12.275 21:42:06 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:12.275 21:42:06 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.275 21:42:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.275 21:42:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:12.275 21:42:06 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:12.275 21:42:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:12.275 21:42:06 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.874 Waiting for block devices as requested 00:04:14.874 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:15.133 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:15.134 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:15.134 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:15.134 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:15.393 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:15.393 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:15.393 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:15.393 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:15.653 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:15.653 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:15.653 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:15.912 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:15.912 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:15.912 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:15.912 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:16.172 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:16.172 21:42:10 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:16.172 21:42:10 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:16.172 21:42:10 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:16.172 21:42:10 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:16.172 21:42:10 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:16.172 21:42:10 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:16.172 21:42:10 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:16.172 21:42:10 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:16.172 21:42:10 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:16.172 21:42:10 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:16.172 21:42:10 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:16.172 21:42:10 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:16.172 21:42:10 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:16.172 21:42:10 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:16.172 21:42:10 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:16.172 21:42:10 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:16.172 21:42:10 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:16.172 21:42:10 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:16.172 21:42:10 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:16.172 21:42:10 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:16.172 21:42:10 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:16.172 21:42:10 -- common/autotest_common.sh@1557 -- # continue 00:04:16.172 21:42:10 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:16.172 21:42:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:16.172 21:42:10 -- common/autotest_common.sh@10 -- # set +x 00:04:16.172 21:42:10 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:16.172 21:42:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:16.172 21:42:10 -- common/autotest_common.sh@10 -- # set +x 00:04:16.172 21:42:10 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.708 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
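The nvme_namespace_revert pass traced above resolves the controller node from its PCI address through sysfs, then reads the OACS word with nvme id-ctrl and masks bit 3 (namespace management) before checking unvmcap. A minimal standalone sketch of that probe, assuming nvme-cli is installed; the BDF 0000:5e:00.0 and the resulting /dev/nvme0 are specific to this host:

# Re-run the oacs / unvmcap probe from autotest_common.sh by hand.
bdf=0000:5e:00.0                                    # controller BDF on this box
for link in /sys/class/nvme/nvme*; do
  path=$(readlink -f "$link")
  [[ $path == *"$bdf/nvme/nvme"* ]] || continue
  ctrlr=/dev/$(basename "$path")                    # -> /dev/nvme0 in this run
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
  if (( (oacs & 0x8) != 0 )); then                  # OACS bit 3: ns management
    echo "$ctrlr supports namespace management (oacs=$oacs, unvmcap=$unvmcap)"
  fi
done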
00:04:18.708 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.708 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.280 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.538 21:42:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:19.538 21:42:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.538 21:42:13 -- common/autotest_common.sh@10 -- # set +x 00:04:19.538 21:42:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:19.538 21:42:13 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:19.538 21:42:13 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:19.538 21:42:13 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:19.538 21:42:13 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:19.538 21:42:13 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:19.538 21:42:13 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:19.538 21:42:13 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:19.538 21:42:13 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.538 21:42:13 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:19.538 21:42:13 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:19.538 21:42:13 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:19.538 21:42:13 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:19.538 21:42:13 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:19.538 21:42:13 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:19.538 21:42:13 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:19.538 21:42:13 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:19.538 21:42:13 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:19.538 21:42:13 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:19.538 21:42:13 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:19.538 21:42:13 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3508745 00:04:19.538 21:42:13 -- common/autotest_common.sh@1598 -- # waitforlisten 3508745 00:04:19.538 21:42:13 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.538 21:42:13 -- common/autotest_common.sh@829 -- # '[' -z 3508745 ']' 00:04:19.538 21:42:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.538 21:42:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.538 21:42:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.538 21:42:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.538 21:42:13 -- common/autotest_common.sh@10 -- # set +x 00:04:19.538 [2024-07-15 21:42:13.726748] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
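opal_revert_cleanup above builds its controller list by matching each PCI function's device id against 0x0a54 (the value read back from /sys/bus/pci/devices/0000:5e:00.0/device in this run) before starting spdk_tgt. A standalone sketch of that filter, assuming the usual sysfs layout; the class-code guard is an addition for safety, not part of the traced script:

# List NVMe BDFs whose PCI device id matches the one targeted above.
want=0x0a54
for dev in /sys/bus/pci/devices/*; do
  [[ $(cat "$dev/class" 2>/dev/null) == 0x010802 ]] || continue  # NVMe class code
  [[ $(cat "$dev/device") == "$want" ]] && basename "$dev"       # 0000:5e:00.0 here
done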
00:04:19.538 [2024-07-15 21:42:13.726793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508745 ] 00:04:19.538 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.796 [2024-07-15 21:42:13.783019] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.796 [2024-07-15 21:42:13.859317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.363 21:42:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.363 21:42:14 -- common/autotest_common.sh@862 -- # return 0 00:04:20.363 21:42:14 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:20.363 21:42:14 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:20.363 21:42:14 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:23.650 nvme0n1 00:04:23.650 21:42:17 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:23.651 [2024-07-15 21:42:17.659829] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:23.651 request: 00:04:23.651 { 00:04:23.651 "nvme_ctrlr_name": "nvme0", 00:04:23.651 "password": "test", 00:04:23.651 "method": "bdev_nvme_opal_revert", 00:04:23.651 "req_id": 1 00:04:23.651 } 00:04:23.651 Got JSON-RPC error response 00:04:23.651 response: 00:04:23.651 { 00:04:23.651 "code": -32602, 00:04:23.651 "message": "Invalid parameters" 00:04:23.651 } 00:04:23.651 21:42:17 -- common/autotest_common.sh@1604 -- # true 00:04:23.651 21:42:17 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:23.651 21:42:17 -- common/autotest_common.sh@1608 -- # killprocess 3508745 00:04:23.651 21:42:17 -- common/autotest_common.sh@948 -- # '[' -z 3508745 ']' 00:04:23.651 21:42:17 -- common/autotest_common.sh@952 -- # kill -0 3508745 00:04:23.651 21:42:17 -- common/autotest_common.sh@953 -- # uname 00:04:23.651 21:42:17 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:23.651 21:42:17 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3508745 00:04:23.651 21:42:17 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:23.651 21:42:17 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:23.651 21:42:17 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3508745' 00:04:23.651 killing process with pid 3508745 00:04:23.651 21:42:17 -- common/autotest_common.sh@967 -- # kill 3508745 00:04:23.651 21:42:17 -- common/autotest_common.sh@972 -- # wait 3508745 00:04:25.558 21:42:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:25.558 21:42:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:25.558 21:42:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:25.558 21:42:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:25.558 21:42:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:25.558 21:42:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.558 21:42:19 -- common/autotest_common.sh@10 -- # set +x 00:04:25.558 21:42:19 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:25.558 21:42:19 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:25.558 21:42:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:25.558 21:42:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.558 21:42:19 -- common/autotest_common.sh@10 -- # set +x 00:04:25.558 ************************************ 00:04:25.558 START TEST env 00:04:25.558 ************************************ 00:04:25.558 21:42:19 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:25.558 * Looking for test storage... 00:04:25.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:25.558 21:42:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:25.558 21:42:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.558 21:42:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.558 21:42:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.558 ************************************ 00:04:25.558 START TEST env_memory 00:04:25.558 ************************************ 00:04:25.558 21:42:19 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:25.558 00:04:25.558 00:04:25.558 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.558 http://cunit.sourceforge.net/ 00:04:25.558 00:04:25.558 00:04:25.558 Suite: memory 00:04:25.558 Test: alloc and free memory map ...[2024-07-15 21:42:19.518768] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:25.558 passed 00:04:25.558 Test: mem map translation ...[2024-07-15 21:42:19.536750] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:25.558 [2024-07-15 21:42:19.536765] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:25.559 [2024-07-15 21:42:19.536800] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:25.559 [2024-07-15 21:42:19.536810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:25.559 passed 00:04:25.559 Test: mem map registration ...[2024-07-15 21:42:19.573343] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:25.559 [2024-07-15 21:42:19.573357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:25.559 passed 00:04:25.559 Test: mem map adjacent registrations ...passed 00:04:25.559 00:04:25.559 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.559 suites 1 1 n/a 0 0 00:04:25.559 tests 4 4 4 0 0 00:04:25.559 asserts 152 152 152 0 n/a 00:04:25.559 00:04:25.559 Elapsed time = 0.131 seconds 00:04:25.559 00:04:25.559 real 0m0.139s 00:04:25.559 user 0m0.129s 00:04:25.559 sys 0m0.010s 00:04:25.559 21:42:19 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.559 21:42:19 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:25.559 ************************************ 00:04:25.559 END TEST env_memory 00:04:25.559 ************************************ 00:04:25.559 21:42:19 env -- common/autotest_common.sh@1142 -- # return 0 00:04:25.559 21:42:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:25.559 21:42:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.559 21:42:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.559 21:42:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.559 ************************************ 00:04:25.559 START TEST env_vtophys 00:04:25.559 ************************************ 00:04:25.559 21:42:19 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:25.559 EAL: lib.eal log level changed from notice to debug 00:04:25.559 EAL: Detected lcore 0 as core 0 on socket 0 00:04:25.559 EAL: Detected lcore 1 as core 1 on socket 0 00:04:25.559 EAL: Detected lcore 2 as core 2 on socket 0 00:04:25.559 EAL: Detected lcore 3 as core 3 on socket 0 00:04:25.559 EAL: Detected lcore 4 as core 4 on socket 0 00:04:25.559 EAL: Detected lcore 5 as core 5 on socket 0 00:04:25.559 EAL: Detected lcore 6 as core 6 on socket 0 00:04:25.559 EAL: Detected lcore 7 as core 8 on socket 0 00:04:25.559 EAL: Detected lcore 8 as core 9 on socket 0 00:04:25.559 EAL: Detected lcore 9 as core 10 on socket 0 00:04:25.559 EAL: Detected lcore 10 as core 11 on socket 0 00:04:25.559 EAL: Detected lcore 11 as core 12 on socket 0 00:04:25.559 EAL: Detected lcore 12 as core 13 on socket 0 00:04:25.559 EAL: Detected lcore 13 as core 16 on socket 0 00:04:25.559 EAL: Detected lcore 14 as core 17 on socket 0 00:04:25.559 EAL: Detected lcore 15 as core 18 on socket 0 00:04:25.559 EAL: Detected lcore 16 as core 19 on socket 0 00:04:25.559 EAL: Detected lcore 17 as core 20 on socket 0 00:04:25.559 EAL: Detected lcore 18 as core 21 on socket 0 00:04:25.559 EAL: Detected lcore 19 as core 25 on socket 0 00:04:25.559 EAL: Detected lcore 20 as core 26 on socket 0 00:04:25.559 EAL: Detected lcore 21 as core 27 on socket 0 00:04:25.559 EAL: Detected lcore 22 as core 28 on socket 0 00:04:25.559 EAL: Detected lcore 23 as core 29 on socket 0 00:04:25.559 EAL: Detected lcore 24 as core 0 on socket 1 00:04:25.559 EAL: Detected lcore 25 as core 1 on socket 1 00:04:25.559 EAL: Detected lcore 26 as core 2 on socket 1 00:04:25.559 EAL: Detected lcore 27 as core 3 on socket 1 00:04:25.559 EAL: Detected lcore 28 as core 4 on socket 1 00:04:25.559 EAL: Detected lcore 29 as core 5 on socket 1 00:04:25.559 EAL: Detected lcore 30 as core 6 on socket 1 00:04:25.559 EAL: Detected lcore 31 as core 9 on socket 1 00:04:25.559 EAL: Detected lcore 32 as core 10 on socket 1 00:04:25.559 EAL: Detected lcore 33 as core 11 on socket 1 00:04:25.559 EAL: Detected lcore 34 as core 12 on socket 1 00:04:25.559 EAL: Detected lcore 35 as core 13 on socket 1 00:04:25.559 EAL: Detected lcore 36 as core 16 on socket 1 00:04:25.559 EAL: Detected lcore 37 as core 17 on socket 1 00:04:25.559 EAL: Detected lcore 38 as core 18 on socket 1 00:04:25.559 EAL: Detected lcore 39 as core 19 on socket 1 00:04:25.559 EAL: Detected lcore 40 as core 20 on socket 1 00:04:25.559 EAL: Detected lcore 41 as core 21 on socket 1 00:04:25.559 EAL: Detected lcore 42 as core 24 on socket 1 00:04:25.559 EAL: Detected lcore 43 as core 25 on socket 1 00:04:25.559 EAL: Detected lcore 44 as core 
26 on socket 1 00:04:25.559 EAL: Detected lcore 45 as core 27 on socket 1 00:04:25.559 EAL: Detected lcore 46 as core 28 on socket 1 00:04:25.559 EAL: Detected lcore 47 as core 29 on socket 1 00:04:25.559 EAL: Detected lcore 48 as core 0 on socket 0 00:04:25.559 EAL: Detected lcore 49 as core 1 on socket 0 00:04:25.559 EAL: Detected lcore 50 as core 2 on socket 0 00:04:25.559 EAL: Detected lcore 51 as core 3 on socket 0 00:04:25.559 EAL: Detected lcore 52 as core 4 on socket 0 00:04:25.559 EAL: Detected lcore 53 as core 5 on socket 0 00:04:25.559 EAL: Detected lcore 54 as core 6 on socket 0 00:04:25.559 EAL: Detected lcore 55 as core 8 on socket 0 00:04:25.559 EAL: Detected lcore 56 as core 9 on socket 0 00:04:25.559 EAL: Detected lcore 57 as core 10 on socket 0 00:04:25.559 EAL: Detected lcore 58 as core 11 on socket 0 00:04:25.559 EAL: Detected lcore 59 as core 12 on socket 0 00:04:25.559 EAL: Detected lcore 60 as core 13 on socket 0 00:04:25.559 EAL: Detected lcore 61 as core 16 on socket 0 00:04:25.559 EAL: Detected lcore 62 as core 17 on socket 0 00:04:25.559 EAL: Detected lcore 63 as core 18 on socket 0 00:04:25.559 EAL: Detected lcore 64 as core 19 on socket 0 00:04:25.559 EAL: Detected lcore 65 as core 20 on socket 0 00:04:25.559 EAL: Detected lcore 66 as core 21 on socket 0 00:04:25.559 EAL: Detected lcore 67 as core 25 on socket 0 00:04:25.559 EAL: Detected lcore 68 as core 26 on socket 0 00:04:25.559 EAL: Detected lcore 69 as core 27 on socket 0 00:04:25.559 EAL: Detected lcore 70 as core 28 on socket 0 00:04:25.559 EAL: Detected lcore 71 as core 29 on socket 0 00:04:25.559 EAL: Detected lcore 72 as core 0 on socket 1 00:04:25.559 EAL: Detected lcore 73 as core 1 on socket 1 00:04:25.559 EAL: Detected lcore 74 as core 2 on socket 1 00:04:25.559 EAL: Detected lcore 75 as core 3 on socket 1 00:04:25.559 EAL: Detected lcore 76 as core 4 on socket 1 00:04:25.559 EAL: Detected lcore 77 as core 5 on socket 1 00:04:25.559 EAL: Detected lcore 78 as core 6 on socket 1 00:04:25.559 EAL: Detected lcore 79 as core 9 on socket 1 00:04:25.559 EAL: Detected lcore 80 as core 10 on socket 1 00:04:25.559 EAL: Detected lcore 81 as core 11 on socket 1 00:04:25.559 EAL: Detected lcore 82 as core 12 on socket 1 00:04:25.559 EAL: Detected lcore 83 as core 13 on socket 1 00:04:25.559 EAL: Detected lcore 84 as core 16 on socket 1 00:04:25.559 EAL: Detected lcore 85 as core 17 on socket 1 00:04:25.559 EAL: Detected lcore 86 as core 18 on socket 1 00:04:25.559 EAL: Detected lcore 87 as core 19 on socket 1 00:04:25.559 EAL: Detected lcore 88 as core 20 on socket 1 00:04:25.559 EAL: Detected lcore 89 as core 21 on socket 1 00:04:25.559 EAL: Detected lcore 90 as core 24 on socket 1 00:04:25.559 EAL: Detected lcore 91 as core 25 on socket 1 00:04:25.559 EAL: Detected lcore 92 as core 26 on socket 1 00:04:25.559 EAL: Detected lcore 93 as core 27 on socket 1 00:04:25.559 EAL: Detected lcore 94 as core 28 on socket 1 00:04:25.559 EAL: Detected lcore 95 as core 29 on socket 1 00:04:25.559 EAL: Maximum logical cores by configuration: 128 00:04:25.559 EAL: Detected CPU lcores: 96 00:04:25.559 EAL: Detected NUMA nodes: 2 00:04:25.559 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:25.559 EAL: Detected shared linkage of DPDK 00:04:25.559 EAL: No shared files mode enabled, IPC will be disabled 00:04:25.559 EAL: Bus pci wants IOVA as 'DC' 00:04:25.559 EAL: Buses did not request a specific IOVA mode. 00:04:25.559 EAL: IOMMU is available, selecting IOVA as VA mode. 
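The lcore map EAL prints above (96 lcores across 2 sockets, with hyperthread siblings N and N+48 sharing a core) comes from the kernel's CPU topology files, so it can be reproduced without DPDK as a quick cross-check:

# Rebuild EAL's "Detected lcore N as core C on socket S" lines from sysfs.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  n=${cpu##*cpu}
  echo "lcore $n -> core $(cat "$cpu/topology/core_id")" \
       "on socket $(cat "$cpu/topology/physical_package_id")"
done | sort -n -k2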
00:04:25.559 EAL: Selected IOVA mode 'VA' 00:04:25.559 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.559 EAL: Probing VFIO support... 00:04:25.559 EAL: IOMMU type 1 (Type 1) is supported 00:04:25.559 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:25.559 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:25.559 EAL: VFIO support initialized 00:04:25.559 EAL: Ask a virtual area of 0x2e000 bytes 00:04:25.559 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:25.559 EAL: Setting up physically contiguous memory... 00:04:25.559 EAL: Setting maximum number of open files to 524288 00:04:25.559 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:25.559 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:25.559 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:25.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.559 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:25.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.559 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:25.559 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:25.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.559 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:25.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.559 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:25.559 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:25.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.559 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:25.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.559 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:25.559 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:25.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.559 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:25.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.559 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:25.559 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:25.559 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:25.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.559 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:25.559 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.559 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:25.559 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:25.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.559 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:25.559 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.559 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:25.559 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:25.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.560 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:25.560 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:25.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.560 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:25.560 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:25.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.560 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:25.560 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.560 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:25.560 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:25.560 EAL: Hugepages will be freed exactly as allocated. 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: TSC frequency is ~2300000 KHz 00:04:25.560 EAL: Main lcore 0 is ready (tid=7fcae1102a00;cpuset=[0]) 00:04:25.560 EAL: Trying to obtain current memory policy. 00:04:25.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.560 EAL: Restoring previous memory policy: 0 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was expanded by 2MB 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:25.560 EAL: Mem event callback 'spdk:(nil)' registered 00:04:25.560 00:04:25.560 00:04:25.560 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.560 http://cunit.sourceforge.net/ 00:04:25.560 00:04:25.560 00:04:25.560 Suite: components_suite 00:04:25.560 Test: vtophys_malloc_test ...passed 00:04:25.560 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:25.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.560 EAL: Restoring previous memory policy: 4 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was expanded by 4MB 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was shrunk by 4MB 00:04:25.560 EAL: Trying to obtain current memory policy. 00:04:25.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.560 EAL: Restoring previous memory policy: 4 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was expanded by 6MB 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was shrunk by 6MB 00:04:25.560 EAL: Trying to obtain current memory policy. 
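The memseg reservations above are self-consistent: every list holds n_segs:8192 segments of hugepage_sz:2097152 (2 MiB), and 8192 x 2 MiB = 16 GiB = 0x400000000, exactly the size of each virtual area EAL asks for per list; with 4 lists per socket and 2 sockets that is 128 GiB of address space reserved up front, independent of the 2048 hugepages actually available on node 0 in this run.

# Sanity-check the reservation size printed in the EAL lines above.
printf '0x%x\n' $(( 8192 * 2097152 ))   # -> 0x400000000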
00:04:25.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.560 EAL: Restoring previous memory policy: 4 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was expanded by 10MB 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was shrunk by 10MB 00:04:25.560 EAL: Trying to obtain current memory policy. 00:04:25.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.560 EAL: Restoring previous memory policy: 4 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was expanded by 18MB 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was shrunk by 18MB 00:04:25.560 EAL: Trying to obtain current memory policy. 00:04:25.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.560 EAL: Restoring previous memory policy: 4 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was expanded by 34MB 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was shrunk by 34MB 00:04:25.560 EAL: Trying to obtain current memory policy. 00:04:25.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.560 EAL: Restoring previous memory policy: 4 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.560 EAL: request: mp_malloc_sync 00:04:25.560 EAL: No shared files mode enabled, IPC is disabled 00:04:25.560 EAL: Heap on socket 0 was expanded by 66MB 00:04:25.560 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.819 EAL: request: mp_malloc_sync 00:04:25.819 EAL: No shared files mode enabled, IPC is disabled 00:04:25.819 EAL: Heap on socket 0 was shrunk by 66MB 00:04:25.819 EAL: Trying to obtain current memory policy. 00:04:25.819 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.819 EAL: Restoring previous memory policy: 4 00:04:25.819 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.819 EAL: request: mp_malloc_sync 00:04:25.819 EAL: No shared files mode enabled, IPC is disabled 00:04:25.819 EAL: Heap on socket 0 was expanded by 130MB 00:04:25.819 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.819 EAL: request: mp_malloc_sync 00:04:25.819 EAL: No shared files mode enabled, IPC is disabled 00:04:25.819 EAL: Heap on socket 0 was shrunk by 130MB 00:04:25.819 EAL: Trying to obtain current memory policy. 
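The heap growth steps in vtophys_spdk_malloc_test are not arbitrary: the expansions logged here run 4, 6, 10, 18, 34, 66, 130 MB and continue below with 258, 514 and 1026 MB, i.e. 2^k + 2 MB for k = 1..10, which walks each allocation across a power-of-two boundary. The pattern is easy to confirm:

# The expansion sizes above and below follow 2^k + 2 (in MB).
for k in $(seq 1 10); do printf '%d ' $(( (1 << k) + 2 )); done; echo
# -> 4 6 10 18 34 66 130 258 514 1026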
00:04:25.819 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.819 EAL: Restoring previous memory policy: 4 00:04:25.819 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.819 EAL: request: mp_malloc_sync 00:04:25.819 EAL: No shared files mode enabled, IPC is disabled 00:04:25.819 EAL: Heap on socket 0 was expanded by 258MB 00:04:25.819 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.819 EAL: request: mp_malloc_sync 00:04:25.819 EAL: No shared files mode enabled, IPC is disabled 00:04:25.819 EAL: Heap on socket 0 was shrunk by 258MB 00:04:25.819 EAL: Trying to obtain current memory policy. 00:04:25.819 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.078 EAL: Restoring previous memory policy: 4 00:04:26.078 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.078 EAL: request: mp_malloc_sync 00:04:26.078 EAL: No shared files mode enabled, IPC is disabled 00:04:26.078 EAL: Heap on socket 0 was expanded by 514MB 00:04:26.078 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.078 EAL: request: mp_malloc_sync 00:04:26.078 EAL: No shared files mode enabled, IPC is disabled 00:04:26.078 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.078 EAL: Trying to obtain current memory policy. 00:04:26.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.335 EAL: Restoring previous memory policy: 4 00:04:26.335 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.335 EAL: request: mp_malloc_sync 00:04:26.335 EAL: No shared files mode enabled, IPC is disabled 00:04:26.335 EAL: Heap on socket 0 was expanded by 1026MB 00:04:26.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.594 EAL: request: mp_malloc_sync 00:04:26.594 EAL: No shared files mode enabled, IPC is disabled 00:04:26.594 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:26.594 passed 00:04:26.594 00:04:26.594 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.594 suites 1 1 n/a 0 0 00:04:26.594 tests 2 2 2 0 0 00:04:26.594 asserts 497 497 497 0 n/a 00:04:26.594 00:04:26.594 Elapsed time = 0.972 seconds 00:04:26.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.594 EAL: request: mp_malloc_sync 00:04:26.594 EAL: No shared files mode enabled, IPC is disabled 00:04:26.594 EAL: Heap on socket 0 was shrunk by 2MB 00:04:26.594 EAL: No shared files mode enabled, IPC is disabled 00:04:26.594 EAL: No shared files mode enabled, IPC is disabled 00:04:26.594 EAL: No shared files mode enabled, IPC is disabled 00:04:26.594 00:04:26.594 real 0m1.095s 00:04:26.594 user 0m0.633s 00:04:26.594 sys 0m0.422s 00:04:26.594 21:42:20 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.594 21:42:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:26.594 ************************************ 00:04:26.594 END TEST env_vtophys 00:04:26.594 ************************************ 00:04:26.594 21:42:20 env -- common/autotest_common.sh@1142 -- # return 0 00:04:26.594 21:42:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:26.594 21:42:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.594 21:42:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.594 21:42:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.594 ************************************ 00:04:26.594 START TEST env_pci 00:04:26.594 ************************************ 00:04:26.594 21:42:20 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:26.854 00:04:26.854 00:04:26.854 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.854 http://cunit.sourceforge.net/ 00:04:26.854 00:04:26.854 00:04:26.854 Suite: pci 00:04:26.854 Test: pci_hook ...[2024-07-15 21:42:20.846322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3510175 has claimed it 00:04:26.854 EAL: Cannot find device (10000:00:01.0) 00:04:26.854 EAL: Failed to attach device on primary process 00:04:26.854 passed 00:04:26.854 00:04:26.854 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.854 suites 1 1 n/a 0 0 00:04:26.854 tests 1 1 1 0 0 00:04:26.854 asserts 25 25 25 0 n/a 00:04:26.854 00:04:26.854 Elapsed time = 0.026 seconds 00:04:26.854 00:04:26.854 real 0m0.045s 00:04:26.854 user 0m0.016s 00:04:26.854 sys 0m0.028s 00:04:26.854 21:42:20 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.854 21:42:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:26.854 ************************************ 00:04:26.854 END TEST env_pci 00:04:26.854 ************************************ 00:04:26.854 21:42:20 env -- common/autotest_common.sh@1142 -- # return 0 00:04:26.854 21:42:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:26.854 21:42:20 env -- env/env.sh@15 -- # uname 00:04:26.854 21:42:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:26.854 21:42:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:26.854 21:42:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.854 21:42:20 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:26.854 21:42:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.854 21:42:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.854 ************************************ 00:04:26.854 START TEST env_dpdk_post_init 00:04:26.854 ************************************ 00:04:26.854 21:42:20 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.854 EAL: Detected CPU lcores: 96 00:04:26.854 EAL: Detected NUMA nodes: 2 00:04:26.854 EAL: Detected shared linkage of DPDK 00:04:26.854 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.854 EAL: Selected IOVA mode 'VA' 00:04:26.854 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.854 EAL: VFIO support initialized 00:04:26.854 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.854 EAL: Using IOMMU type 1 (Type 1) 00:04:26.854 EAL: Ignore mapping IO port bar(1) 00:04:26.854 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:26.854 EAL: Ignore mapping IO port bar(1) 00:04:26.854 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:26.854 EAL: Ignore mapping IO port bar(1) 00:04:26.854 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:26.854 EAL: Ignore mapping IO port bar(1) 00:04:26.854 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:27.113 EAL: Ignore mapping IO port bar(1) 00:04:27.113 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:27.113 EAL: Ignore mapping IO port bar(1) 00:04:27.113 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:27.113 EAL: Ignore mapping IO port bar(1) 00:04:27.113 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:27.113 EAL: Ignore mapping IO port bar(1) 00:04:27.113 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:27.681 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:27.681 EAL: Ignore mapping IO port bar(1) 00:04:27.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:27.681 EAL: Ignore mapping IO port bar(1) 00:04:27.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:27.681 EAL: Ignore mapping IO port bar(1) 00:04:27.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:27.940 EAL: Ignore mapping IO port bar(1) 00:04:27.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:27.940 EAL: Ignore mapping IO port bar(1) 00:04:27.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:27.940 EAL: Ignore mapping IO port bar(1) 00:04:27.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:27.940 EAL: Ignore mapping IO port bar(1) 00:04:27.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:27.940 EAL: Ignore mapping IO port bar(1) 00:04:27.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:31.227 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:31.227 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:31.227 Starting DPDK initialization... 00:04:31.227 Starting SPDK post initialization... 00:04:31.227 SPDK NVMe probe 00:04:31.227 Attaching to 0000:5e:00.0 00:04:31.227 Attached to 0000:5e:00.0 00:04:31.227 Cleaning up... 
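env_dpdk_post_init above only probes functions already bound to a userspace driver (vfio-pci here), attaches the NVMe controller at 0000:5e:00.0, then releases it during cleanup. A generic sysfs check (not an SPDK interface) for what a BDF is bound to at any point in that cycle:

# Show the driver currently bound to the NVMe BDF used in this run.
bdf=0000:5e:00.0
drv=$(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver" 2>/dev/null)")
echo "$bdf -> ${drv:-no driver bound}"   # vfio-pci while the test owns it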
00:04:31.227 00:04:31.227 real 0m4.315s 00:04:31.227 user 0m3.276s 00:04:31.227 sys 0m0.106s 00:04:31.227 21:42:25 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.227 21:42:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.227 ************************************ 00:04:31.227 END TEST env_dpdk_post_init 00:04:31.227 ************************************ 00:04:31.227 21:42:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.227 21:42:25 env -- env/env.sh@26 -- # uname 00:04:31.227 21:42:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:31.227 21:42:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.227 21:42:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.227 21:42:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.227 21:42:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.227 ************************************ 00:04:31.227 START TEST env_mem_callbacks 00:04:31.227 ************************************ 00:04:31.227 21:42:25 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.227 EAL: Detected CPU lcores: 96 00:04:31.227 EAL: Detected NUMA nodes: 2 00:04:31.227 EAL: Detected shared linkage of DPDK 00:04:31.227 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.227 EAL: Selected IOVA mode 'VA' 00:04:31.227 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.227 EAL: VFIO support initialized 00:04:31.227 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.227 00:04:31.227 00:04:31.227 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.227 http://cunit.sourceforge.net/ 00:04:31.227 00:04:31.227 00:04:31.227 Suite: memory 00:04:31.227 Test: test ... 
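Each register/unregister line in the trace that follows is printed by the test's memory-event callback as EAL maps and unmaps regions behind malloc; note that a buffer only triggers a new registration when the allocation forces EAL to grab a fresh region (the 64-byte malloc below is served from the already-registered area). A small post-processing sketch, assuming the 'register ADDR LEN' wording seen here, that flags any range still registered at the end of such a trace:

# Check register/unregister pairing in a mem_callbacks trace on stdin.
awk '
  { sub(/^[0-9:.]+ /, "") }                  # drop the timestamp column
  $1 == "register"   { live[$2] = $3 }
  $1 == "unregister" { if (live[$2] == $3) delete live[$2] }
  END { for (a in live) print "still registered:", a, "len", live[a] }
'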
00:04:31.227 register 0x200000200000 2097152 00:04:31.227 malloc 3145728 00:04:31.227 register 0x200000400000 4194304 00:04:31.227 buf 0x200000500000 len 3145728 PASSED 00:04:31.227 malloc 64 00:04:31.227 buf 0x2000004fff40 len 64 PASSED 00:04:31.227 malloc 4194304 00:04:31.227 register 0x200000800000 6291456 00:04:31.227 buf 0x200000a00000 len 4194304 PASSED 00:04:31.227 free 0x200000500000 3145728 00:04:31.227 free 0x2000004fff40 64 00:04:31.227 unregister 0x200000400000 4194304 PASSED 00:04:31.227 free 0x200000a00000 4194304 00:04:31.227 unregister 0x200000800000 6291456 PASSED 00:04:31.227 malloc 8388608 00:04:31.227 register 0x200000400000 10485760 00:04:31.227 buf 0x200000600000 len 8388608 PASSED 00:04:31.227 free 0x200000600000 8388608 00:04:31.227 unregister 0x200000400000 10485760 PASSED 00:04:31.227 passed 00:04:31.227 00:04:31.227 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.227 suites 1 1 n/a 0 0 00:04:31.227 tests 1 1 1 0 0 00:04:31.227 asserts 15 15 15 0 n/a 00:04:31.227 00:04:31.227 Elapsed time = 0.005 seconds 00:04:31.227 00:04:31.227 real 0m0.052s 00:04:31.227 user 0m0.018s 00:04:31.227 sys 0m0.033s 00:04:31.227 21:42:25 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.227 21:42:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:31.227 ************************************ 00:04:31.227 END TEST env_mem_callbacks 00:04:31.227 ************************************ 00:04:31.227 21:42:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.227 00:04:31.227 real 0m6.027s 00:04:31.227 user 0m4.229s 00:04:31.227 sys 0m0.853s 00:04:31.227 21:42:25 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.227 21:42:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.227 ************************************ 00:04:31.227 END TEST env 00:04:31.227 ************************************ 00:04:31.227 21:42:25 -- common/autotest_common.sh@1142 -- # return 0 00:04:31.227 21:42:25 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:31.227 21:42:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.227 21:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.227 21:42:25 -- common/autotest_common.sh@10 -- # set +x 00:04:31.227 ************************************ 00:04:31.227 START TEST rpc 00:04:31.228 ************************************ 00:04:31.228 21:42:25 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:31.487 * Looking for test storage... 00:04:31.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.487 21:42:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3510999 00:04:31.487 21:42:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:31.487 21:42:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.487 21:42:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3510999 00:04:31.487 21:42:25 rpc -- common/autotest_common.sh@829 -- # '[' -z 3510999 ']' 00:04:31.487 21:42:25 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.487 21:42:25 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.487 21:42:25 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
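rpc.sh drives the freshly started target through rpc.py over the Unix socket announced above; outside the rpc_cmd wrapper, the first rpc_integrity steps below correspond to direct invocations like these (socket path, bdev size and block size exactly as in this run):

# Direct rpc.py equivalents of the rpc_integrity steps below.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk.sock bdev_get_bdevs | jq length     # 0 before create
$rpc -s /var/tmp/spdk.sock bdev_malloc_create 8 512       # prints Malloc0
$rpc -s /var/tmp/spdk.sock bdev_get_bdevs | jq length     # 1 after create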
00:04:31.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.487 21:42:25 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.487 21:42:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.487 [2024-07-15 21:42:25.599648] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:31.487 [2024-07-15 21:42:25.599697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510999 ] 00:04:31.487 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.487 [2024-07-15 21:42:25.654414] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.487 [2024-07-15 21:42:25.727059] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:31.487 [2024-07-15 21:42:25.727100] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3510999' to capture a snapshot of events at runtime. 00:04:31.487 [2024-07-15 21:42:25.727107] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:31.487 [2024-07-15 21:42:25.727113] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:31.487 [2024-07-15 21:42:25.727118] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3510999 for offline analysis/debug. 00:04:31.487 [2024-07-15 21:42:25.727155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.505 21:42:26 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.505 21:42:26 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:32.505 21:42:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:32.505 21:42:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:32.505 21:42:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:32.505 21:42:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:32.505 21:42:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.505 21:42:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.505 21:42:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.505 ************************************ 00:04:32.505 START TEST rpc_integrity 00:04:32.505 ************************************ 00:04:32.505 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:32.505 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.505 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.505 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.505 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.505 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:32.505 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.505 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.505 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.505 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.505 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.506 { 00:04:32.506 "name": "Malloc0", 00:04:32.506 "aliases": [ 00:04:32.506 "858f5559-fddf-43ff-94e7-2c51bc3ed446" 00:04:32.506 ], 00:04:32.506 "product_name": "Malloc disk", 00:04:32.506 "block_size": 512, 00:04:32.506 "num_blocks": 16384, 00:04:32.506 "uuid": "858f5559-fddf-43ff-94e7-2c51bc3ed446", 00:04:32.506 "assigned_rate_limits": { 00:04:32.506 "rw_ios_per_sec": 0, 00:04:32.506 "rw_mbytes_per_sec": 0, 00:04:32.506 "r_mbytes_per_sec": 0, 00:04:32.506 "w_mbytes_per_sec": 0 00:04:32.506 }, 00:04:32.506 "claimed": false, 00:04:32.506 "zoned": false, 00:04:32.506 "supported_io_types": { 00:04:32.506 "read": true, 00:04:32.506 "write": true, 00:04:32.506 "unmap": true, 00:04:32.506 "flush": true, 00:04:32.506 "reset": true, 00:04:32.506 "nvme_admin": false, 00:04:32.506 "nvme_io": false, 00:04:32.506 "nvme_io_md": false, 00:04:32.506 "write_zeroes": true, 00:04:32.506 "zcopy": true, 00:04:32.506 "get_zone_info": false, 00:04:32.506 "zone_management": false, 00:04:32.506 "zone_append": false, 00:04:32.506 "compare": false, 00:04:32.506 "compare_and_write": false, 00:04:32.506 "abort": true, 00:04:32.506 "seek_hole": false, 00:04:32.506 "seek_data": false, 00:04:32.506 "copy": true, 00:04:32.506 "nvme_iov_md": false 00:04:32.506 }, 00:04:32.506 "memory_domains": [ 00:04:32.506 { 00:04:32.506 "dma_device_id": "system", 00:04:32.506 "dma_device_type": 1 00:04:32.506 }, 00:04:32.506 { 00:04:32.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.506 "dma_device_type": 2 00:04:32.506 } 00:04:32.506 ], 00:04:32.506 "driver_specific": {} 00:04:32.506 } 00:04:32.506 ]' 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 [2024-07-15 21:42:26.563776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:32.506 [2024-07-15 21:42:26.563807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.506 [2024-07-15 21:42:26.563819] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf948c0 00:04:32.506 [2024-07-15 21:42:26.563825] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.506 
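The rpc_integrity steps traced here (continuing just below) exercise the same JSON-RPC methods that scripts/rpc.py exposes: create a malloc bdev, stack a passthru bdev on top, list both, then tear them down in reverse order. A hedged sketch of the equivalent manual session against the default /var/tmp/spdk.sock socket, with rpc.py invoked from the SPDK tree:

  ./scripts/rpc.py bdev_malloc_create 8 512              # 8 MiB malloc bdev, 512 B blocks -> Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 2
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0

In the bdev dump that follows, Malloc0 now reports "claimed": true with "claim_type": "exclusive_write": the passthru bdev has taken an exclusive write claim on its base device.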
[2024-07-15 21:42:26.565111] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.506 [2024-07-15 21:42:26.565131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.506 Passthru0 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.506 { 00:04:32.506 "name": "Malloc0", 00:04:32.506 "aliases": [ 00:04:32.506 "858f5559-fddf-43ff-94e7-2c51bc3ed446" 00:04:32.506 ], 00:04:32.506 "product_name": "Malloc disk", 00:04:32.506 "block_size": 512, 00:04:32.506 "num_blocks": 16384, 00:04:32.506 "uuid": "858f5559-fddf-43ff-94e7-2c51bc3ed446", 00:04:32.506 "assigned_rate_limits": { 00:04:32.506 "rw_ios_per_sec": 0, 00:04:32.506 "rw_mbytes_per_sec": 0, 00:04:32.506 "r_mbytes_per_sec": 0, 00:04:32.506 "w_mbytes_per_sec": 0 00:04:32.506 }, 00:04:32.506 "claimed": true, 00:04:32.506 "claim_type": "exclusive_write", 00:04:32.506 "zoned": false, 00:04:32.506 "supported_io_types": { 00:04:32.506 "read": true, 00:04:32.506 "write": true, 00:04:32.506 "unmap": true, 00:04:32.506 "flush": true, 00:04:32.506 "reset": true, 00:04:32.506 "nvme_admin": false, 00:04:32.506 "nvme_io": false, 00:04:32.506 "nvme_io_md": false, 00:04:32.506 "write_zeroes": true, 00:04:32.506 "zcopy": true, 00:04:32.506 "get_zone_info": false, 00:04:32.506 "zone_management": false, 00:04:32.506 "zone_append": false, 00:04:32.506 "compare": false, 00:04:32.506 "compare_and_write": false, 00:04:32.506 "abort": true, 00:04:32.506 "seek_hole": false, 00:04:32.506 "seek_data": false, 00:04:32.506 "copy": true, 00:04:32.506 "nvme_iov_md": false 00:04:32.506 }, 00:04:32.506 "memory_domains": [ 00:04:32.506 { 00:04:32.506 "dma_device_id": "system", 00:04:32.506 "dma_device_type": 1 00:04:32.506 }, 00:04:32.506 { 00:04:32.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.506 "dma_device_type": 2 00:04:32.506 } 00:04:32.506 ], 00:04:32.506 "driver_specific": {} 00:04:32.506 }, 00:04:32.506 { 00:04:32.506 "name": "Passthru0", 00:04:32.506 "aliases": [ 00:04:32.506 "39ee416a-407d-58b9-bf1e-46bd7e93d268" 00:04:32.506 ], 00:04:32.506 "product_name": "passthru", 00:04:32.506 "block_size": 512, 00:04:32.506 "num_blocks": 16384, 00:04:32.506 "uuid": "39ee416a-407d-58b9-bf1e-46bd7e93d268", 00:04:32.506 "assigned_rate_limits": { 00:04:32.506 "rw_ios_per_sec": 0, 00:04:32.506 "rw_mbytes_per_sec": 0, 00:04:32.506 "r_mbytes_per_sec": 0, 00:04:32.506 "w_mbytes_per_sec": 0 00:04:32.506 }, 00:04:32.506 "claimed": false, 00:04:32.506 "zoned": false, 00:04:32.506 "supported_io_types": { 00:04:32.506 "read": true, 00:04:32.506 "write": true, 00:04:32.506 "unmap": true, 00:04:32.506 "flush": true, 00:04:32.506 "reset": true, 00:04:32.506 "nvme_admin": false, 00:04:32.506 "nvme_io": false, 00:04:32.506 "nvme_io_md": false, 00:04:32.506 "write_zeroes": true, 00:04:32.506 "zcopy": true, 00:04:32.506 "get_zone_info": false, 00:04:32.506 "zone_management": false, 00:04:32.506 "zone_append": false, 00:04:32.506 "compare": false, 00:04:32.506 "compare_and_write": false, 00:04:32.506 "abort": true, 00:04:32.506 "seek_hole": false, 
00:04:32.506 "seek_data": false, 00:04:32.506 "copy": true, 00:04:32.506 "nvme_iov_md": false 00:04:32.506 }, 00:04:32.506 "memory_domains": [ 00:04:32.506 { 00:04:32.506 "dma_device_id": "system", 00:04:32.506 "dma_device_type": 1 00:04:32.506 }, 00:04:32.506 { 00:04:32.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.506 "dma_device_type": 2 00:04:32.506 } 00:04:32.506 ], 00:04:32.506 "driver_specific": { 00:04:32.506 "passthru": { 00:04:32.506 "name": "Passthru0", 00:04:32.506 "base_bdev_name": "Malloc0" 00:04:32.506 } 00:04:32.506 } 00:04:32.506 } 00:04:32.506 ]' 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.506 21:42:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.506 00:04:32.506 real 0m0.259s 00:04:32.506 user 0m0.166s 00:04:32.506 sys 0m0.032s 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.506 21:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.506 ************************************ 00:04:32.506 END TEST rpc_integrity 00:04:32.506 ************************************ 00:04:32.506 21:42:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.506 21:42:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:32.506 21:42:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.506 21:42:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.506 21:42:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.765 ************************************ 00:04:32.765 START TEST rpc_plugins 00:04:32.765 ************************************ 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:32.765 { 00:04:32.765 "name": "Malloc1", 00:04:32.765 "aliases": [ 00:04:32.765 "7ddabca7-3ce9-4241-89d1-16e4c7fceeab" 00:04:32.765 ], 00:04:32.765 "product_name": "Malloc disk", 00:04:32.765 "block_size": 4096, 00:04:32.765 "num_blocks": 256, 00:04:32.765 "uuid": "7ddabca7-3ce9-4241-89d1-16e4c7fceeab", 00:04:32.765 "assigned_rate_limits": { 00:04:32.765 "rw_ios_per_sec": 0, 00:04:32.765 "rw_mbytes_per_sec": 0, 00:04:32.765 "r_mbytes_per_sec": 0, 00:04:32.765 "w_mbytes_per_sec": 0 00:04:32.765 }, 00:04:32.765 "claimed": false, 00:04:32.765 "zoned": false, 00:04:32.765 "supported_io_types": { 00:04:32.765 "read": true, 00:04:32.765 "write": true, 00:04:32.765 "unmap": true, 00:04:32.765 "flush": true, 00:04:32.765 "reset": true, 00:04:32.765 "nvme_admin": false, 00:04:32.765 "nvme_io": false, 00:04:32.765 "nvme_io_md": false, 00:04:32.765 "write_zeroes": true, 00:04:32.765 "zcopy": true, 00:04:32.765 "get_zone_info": false, 00:04:32.765 "zone_management": false, 00:04:32.765 "zone_append": false, 00:04:32.765 "compare": false, 00:04:32.765 "compare_and_write": false, 00:04:32.765 "abort": true, 00:04:32.765 "seek_hole": false, 00:04:32.765 "seek_data": false, 00:04:32.765 "copy": true, 00:04:32.765 "nvme_iov_md": false 00:04:32.765 }, 00:04:32.765 "memory_domains": [ 00:04:32.765 { 00:04:32.765 "dma_device_id": "system", 00:04:32.765 "dma_device_type": 1 00:04:32.765 }, 00:04:32.765 { 00:04:32.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.765 "dma_device_type": 2 00:04:32.765 } 00:04:32.765 ], 00:04:32.765 "driver_specific": {} 00:04:32.765 } 00:04:32.765 ]' 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:32.765 21:42:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:32.765 00:04:32.765 real 0m0.132s 00:04:32.765 user 0m0.091s 00:04:32.765 sys 0m0.013s 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.765 21:42:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.765 ************************************ 00:04:32.765 END TEST rpc_plugins 00:04:32.765 ************************************ 00:04:32.765 21:42:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.765 21:42:26 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:32.765 21:42:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.765 21:42:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.765 21:42:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.765 ************************************ 00:04:32.765 START TEST rpc_trace_cmd_test 00:04:32.765 ************************************ 00:04:32.765 21:42:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:32.765 21:42:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:32.765 21:42:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:32.765 21:42:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.765 21:42:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.765 21:42:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.765 21:42:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:32.765 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3510999", 00:04:32.765 "tpoint_group_mask": "0x8", 00:04:32.765 "iscsi_conn": { 00:04:32.765 "mask": "0x2", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "scsi": { 00:04:32.765 "mask": "0x4", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "bdev": { 00:04:32.765 "mask": "0x8", 00:04:32.765 "tpoint_mask": "0xffffffffffffffff" 00:04:32.765 }, 00:04:32.765 "nvmf_rdma": { 00:04:32.765 "mask": "0x10", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "nvmf_tcp": { 00:04:32.765 "mask": "0x20", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "ftl": { 00:04:32.765 "mask": "0x40", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "blobfs": { 00:04:32.765 "mask": "0x80", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "dsa": { 00:04:32.765 "mask": "0x200", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "thread": { 00:04:32.765 "mask": "0x400", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "nvme_pcie": { 00:04:32.765 "mask": "0x800", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "iaa": { 00:04:32.765 "mask": "0x1000", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "nvme_tcp": { 00:04:32.765 "mask": "0x2000", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "bdev_nvme": { 00:04:32.765 "mask": "0x4000", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 }, 00:04:32.765 "sock": { 00:04:32.765 "mask": "0x8000", 00:04:32.765 "tpoint_mask": "0x0" 00:04:32.765 } 00:04:32.765 }' 00:04:32.765 21:42:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
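The trace_get_info output above shows exactly one tracepoint group enabled, bdev (mask 0x8, tpoint_mask 0xffffffffffffffff), matching the '-e bdev' flag this spdk_tgt was started with, and tpoint_shm_path points at the live trace buffer. As the startup notices earlier suggested, a snapshot can be pulled while the target runs, roughly:

  # capture a snapshot of the enabled bdev tracepoints from the running target
  ./build/bin/spdk_trace -s spdk_tgt -p 3510999
  # or preserve the shm file for offline analysis once the target exits
  cp /dev/shm/spdk_tgt_trace.pid3510999 /tmp/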
00:04:33.024 00:04:33.024 real 0m0.218s 00:04:33.024 user 0m0.181s 00:04:33.024 sys 0m0.028s 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.024 21:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:33.024 ************************************ 00:04:33.024 END TEST rpc_trace_cmd_test 00:04:33.024 ************************************ 00:04:33.024 21:42:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:33.024 21:42:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:33.024 21:42:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:33.024 21:42:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:33.024 21:42:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.024 21:42:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.024 21:42:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.024 ************************************ 00:04:33.024 START TEST rpc_daemon_integrity 00:04:33.024 ************************************ 00:04:33.024 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:33.024 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.024 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.024 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.024 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.024 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.024 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.283 { 00:04:33.283 "name": "Malloc2", 00:04:33.283 "aliases": [ 00:04:33.283 "68f9cb28-531a-4de2-a63f-84814b3fc969" 00:04:33.283 ], 00:04:33.283 "product_name": "Malloc disk", 00:04:33.283 "block_size": 512, 00:04:33.283 "num_blocks": 16384, 00:04:33.283 "uuid": "68f9cb28-531a-4de2-a63f-84814b3fc969", 00:04:33.283 "assigned_rate_limits": { 00:04:33.283 "rw_ios_per_sec": 0, 00:04:33.283 "rw_mbytes_per_sec": 0, 00:04:33.283 "r_mbytes_per_sec": 0, 00:04:33.283 "w_mbytes_per_sec": 0 00:04:33.283 }, 00:04:33.283 "claimed": false, 00:04:33.283 "zoned": false, 00:04:33.283 "supported_io_types": { 00:04:33.283 "read": true, 00:04:33.283 "write": true, 00:04:33.283 "unmap": true, 00:04:33.283 "flush": true, 00:04:33.283 "reset": true, 00:04:33.283 "nvme_admin": false, 00:04:33.283 "nvme_io": false, 
00:04:33.283 "nvme_io_md": false, 00:04:33.283 "write_zeroes": true, 00:04:33.283 "zcopy": true, 00:04:33.283 "get_zone_info": false, 00:04:33.283 "zone_management": false, 00:04:33.283 "zone_append": false, 00:04:33.283 "compare": false, 00:04:33.283 "compare_and_write": false, 00:04:33.283 "abort": true, 00:04:33.283 "seek_hole": false, 00:04:33.283 "seek_data": false, 00:04:33.283 "copy": true, 00:04:33.283 "nvme_iov_md": false 00:04:33.283 }, 00:04:33.283 "memory_domains": [ 00:04:33.283 { 00:04:33.283 "dma_device_id": "system", 00:04:33.283 "dma_device_type": 1 00:04:33.283 }, 00:04:33.283 { 00:04:33.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.283 "dma_device_type": 2 00:04:33.283 } 00:04:33.283 ], 00:04:33.283 "driver_specific": {} 00:04:33.283 } 00:04:33.283 ]' 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.283 [2024-07-15 21:42:27.374116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:33.283 [2024-07-15 21:42:27.374145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.283 [2024-07-15 21:42:27.374160] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf95210 00:04:33.283 [2024-07-15 21:42:27.374166] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.283 [2024-07-15 21:42:27.375129] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.283 [2024-07-15 21:42:27.375150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.283 Passthru0 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.283 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.283 { 00:04:33.283 "name": "Malloc2", 00:04:33.283 "aliases": [ 00:04:33.283 "68f9cb28-531a-4de2-a63f-84814b3fc969" 00:04:33.283 ], 00:04:33.283 "product_name": "Malloc disk", 00:04:33.283 "block_size": 512, 00:04:33.283 "num_blocks": 16384, 00:04:33.283 "uuid": "68f9cb28-531a-4de2-a63f-84814b3fc969", 00:04:33.283 "assigned_rate_limits": { 00:04:33.283 "rw_ios_per_sec": 0, 00:04:33.284 "rw_mbytes_per_sec": 0, 00:04:33.284 "r_mbytes_per_sec": 0, 00:04:33.284 "w_mbytes_per_sec": 0 00:04:33.284 }, 00:04:33.284 "claimed": true, 00:04:33.284 "claim_type": "exclusive_write", 00:04:33.284 "zoned": false, 00:04:33.284 "supported_io_types": { 00:04:33.284 "read": true, 00:04:33.284 "write": true, 00:04:33.284 "unmap": true, 00:04:33.284 "flush": true, 00:04:33.284 "reset": true, 00:04:33.284 "nvme_admin": false, 00:04:33.284 "nvme_io": false, 00:04:33.284 "nvme_io_md": false, 00:04:33.284 "write_zeroes": true, 00:04:33.284 "zcopy": true, 00:04:33.284 "get_zone_info": 
false, 00:04:33.284 "zone_management": false, 00:04:33.284 "zone_append": false, 00:04:33.284 "compare": false, 00:04:33.284 "compare_and_write": false, 00:04:33.284 "abort": true, 00:04:33.284 "seek_hole": false, 00:04:33.284 "seek_data": false, 00:04:33.284 "copy": true, 00:04:33.284 "nvme_iov_md": false 00:04:33.284 }, 00:04:33.284 "memory_domains": [ 00:04:33.284 { 00:04:33.284 "dma_device_id": "system", 00:04:33.284 "dma_device_type": 1 00:04:33.284 }, 00:04:33.284 { 00:04:33.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.284 "dma_device_type": 2 00:04:33.284 } 00:04:33.284 ], 00:04:33.284 "driver_specific": {} 00:04:33.284 }, 00:04:33.284 { 00:04:33.284 "name": "Passthru0", 00:04:33.284 "aliases": [ 00:04:33.284 "ef3b59e9-3d62-5aaf-8c86-f0aaae9e4c12" 00:04:33.284 ], 00:04:33.284 "product_name": "passthru", 00:04:33.284 "block_size": 512, 00:04:33.284 "num_blocks": 16384, 00:04:33.284 "uuid": "ef3b59e9-3d62-5aaf-8c86-f0aaae9e4c12", 00:04:33.284 "assigned_rate_limits": { 00:04:33.284 "rw_ios_per_sec": 0, 00:04:33.284 "rw_mbytes_per_sec": 0, 00:04:33.284 "r_mbytes_per_sec": 0, 00:04:33.284 "w_mbytes_per_sec": 0 00:04:33.284 }, 00:04:33.284 "claimed": false, 00:04:33.284 "zoned": false, 00:04:33.284 "supported_io_types": { 00:04:33.284 "read": true, 00:04:33.284 "write": true, 00:04:33.284 "unmap": true, 00:04:33.284 "flush": true, 00:04:33.284 "reset": true, 00:04:33.284 "nvme_admin": false, 00:04:33.284 "nvme_io": false, 00:04:33.284 "nvme_io_md": false, 00:04:33.284 "write_zeroes": true, 00:04:33.284 "zcopy": true, 00:04:33.284 "get_zone_info": false, 00:04:33.284 "zone_management": false, 00:04:33.284 "zone_append": false, 00:04:33.284 "compare": false, 00:04:33.284 "compare_and_write": false, 00:04:33.284 "abort": true, 00:04:33.284 "seek_hole": false, 00:04:33.284 "seek_data": false, 00:04:33.284 "copy": true, 00:04:33.284 "nvme_iov_md": false 00:04:33.284 }, 00:04:33.284 "memory_domains": [ 00:04:33.284 { 00:04:33.284 "dma_device_id": "system", 00:04:33.284 "dma_device_type": 1 00:04:33.284 }, 00:04:33.284 { 00:04:33.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.284 "dma_device_type": 2 00:04:33.284 } 00:04:33.284 ], 00:04:33.284 "driver_specific": { 00:04:33.284 "passthru": { 00:04:33.284 "name": "Passthru0", 00:04:33.284 "base_bdev_name": "Malloc2" 00:04:33.284 } 00:04:33.284 } 00:04:33.284 } 00:04:33.284 ]' 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.284 21:42:27 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.284 00:04:33.284 real 0m0.256s 00:04:33.284 user 0m0.160s 00:04:33.284 sys 0m0.043s 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.284 21:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.284 ************************************ 00:04:33.284 END TEST rpc_daemon_integrity 00:04:33.284 ************************************ 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:33.543 21:42:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:33.543 21:42:27 rpc -- rpc/rpc.sh@84 -- # killprocess 3510999 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@948 -- # '[' -z 3510999 ']' 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@952 -- # kill -0 3510999 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@953 -- # uname 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3510999 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3510999' 00:04:33.543 killing process with pid 3510999 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@967 -- # kill 3510999 00:04:33.543 21:42:27 rpc -- common/autotest_common.sh@972 -- # wait 3510999 00:04:33.801 00:04:33.801 real 0m2.421s 00:04:33.801 user 0m3.124s 00:04:33.801 sys 0m0.667s 00:04:33.801 21:42:27 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.801 21:42:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.801 ************************************ 00:04:33.801 END TEST rpc 00:04:33.801 ************************************ 00:04:33.801 21:42:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:33.801 21:42:27 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.801 21:42:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.801 21:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.801 21:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.801 ************************************ 00:04:33.801 START TEST skip_rpc 00:04:33.801 ************************************ 00:04:33.801 21:42:27 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.801 * Looking for test storage... 
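The skip_rpc suite starting here tests the inverse contract of everything above: spdk_tgt is launched with --no-rpc-server, so the target must initialize fully while any RPC against it must fail. The trace below leans on the harness's NOT helper, which succeeds only when the wrapped command fails; schematically:

  # target was started with: spdk_tgt --no-rpc-server -m 0x1
  NOT rpc_cmd spdk_get_version    # passes only because the RPC cannot connect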
00:04:33.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:33.801 21:42:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.801 21:42:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:33.801 21:42:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:33.801 21:42:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.801 21:42:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.801 21:42:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.060 ************************************ 00:04:34.060 START TEST skip_rpc 00:04:34.060 ************************************ 00:04:34.060 21:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:34.060 21:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3511631 00:04:34.060 21:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.060 21:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:34.060 21:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:34.060 [2024-07-15 21:42:28.118159] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:34.060 [2024-07-15 21:42:28.118194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511631 ] 00:04:34.060 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.060 [2024-07-15 21:42:28.171172] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.060 [2024-07-15 21:42:28.243756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.328 21:42:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:39.328 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:39.328 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:39.328 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:39.328 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.328 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3511631 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3511631 ']' 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3511631 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3511631 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3511631' 00:04:39.329 killing process with pid 3511631 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3511631 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3511631 00:04:39.329 00:04:39.329 real 0m5.366s 00:04:39.329 user 0m5.137s 00:04:39.329 sys 0m0.253s 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.329 21:42:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.329 ************************************ 00:04:39.329 END TEST skip_rpc 00:04:39.329 ************************************ 00:04:39.329 21:42:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.329 21:42:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:39.329 21:42:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.329 21:42:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.329 21:42:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.329 ************************************ 00:04:39.329 START TEST skip_rpc_with_json 00:04:39.329 ************************************ 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3512577 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3512577 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3512577 ']' 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
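skip_rpc_with_json (gen_json_config) builds up runtime state over RPC and then proves the target can be reconstructed from its serialized configuration. The flow traced below is roughly:

  rpc_cmd nvmf_get_transports --trtype tcp   # expected to fail: no transport exists yet
  rpc_cmd nvmf_create_transport -t tcp       # creates the TCP transport
  rpc_cmd save_config > config.json          # dump every subsystem's live config as JSON

The large JSON document that follows is that save_config output, covering every registered subsystem from iobuf pool sizing down to the iscsi defaults.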
00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.329 21:42:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.329 [2024-07-15 21:42:33.552514] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:39.329 [2024-07-15 21:42:33.552559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3512577 ] 00:04:39.588 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.588 [2024-07-15 21:42:33.607705] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.588 [2024-07-15 21:42:33.676565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.155 [2024-07-15 21:42:34.357734] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:40.155 request: 00:04:40.155 { 00:04:40.155 "trtype": "tcp", 00:04:40.155 "method": "nvmf_get_transports", 00:04:40.155 "req_id": 1 00:04:40.155 } 00:04:40.155 Got JSON-RPC error response 00:04:40.155 response: 00:04:40.155 { 00:04:40.155 "code": -19, 00:04:40.155 "message": "No such device" 00:04:40.155 } 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.155 [2024-07-15 21:42:34.369846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.155 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.414 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.414 21:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.414 { 00:04:40.414 "subsystems": [ 00:04:40.414 { 00:04:40.414 "subsystem": "vfio_user_target", 00:04:40.414 "config": null 00:04:40.414 }, 00:04:40.414 { 00:04:40.414 "subsystem": "keyring", 00:04:40.414 "config": [] 00:04:40.414 }, 00:04:40.414 { 00:04:40.414 "subsystem": "iobuf", 00:04:40.414 "config": [ 00:04:40.414 { 00:04:40.414 "method": "iobuf_set_options", 00:04:40.414 "params": { 00:04:40.414 "small_pool_count": 8192, 00:04:40.414 "large_pool_count": 1024, 00:04:40.414 "small_bufsize": 8192, 00:04:40.414 "large_bufsize": 
135168 00:04:40.414 } 00:04:40.414 } 00:04:40.414 ] 00:04:40.414 }, 00:04:40.414 { 00:04:40.414 "subsystem": "sock", 00:04:40.414 "config": [ 00:04:40.414 { 00:04:40.414 "method": "sock_set_default_impl", 00:04:40.414 "params": { 00:04:40.414 "impl_name": "posix" 00:04:40.414 } 00:04:40.414 }, 00:04:40.414 { 00:04:40.414 "method": "sock_impl_set_options", 00:04:40.414 "params": { 00:04:40.414 "impl_name": "ssl", 00:04:40.414 "recv_buf_size": 4096, 00:04:40.414 "send_buf_size": 4096, 00:04:40.414 "enable_recv_pipe": true, 00:04:40.414 "enable_quickack": false, 00:04:40.414 "enable_placement_id": 0, 00:04:40.414 "enable_zerocopy_send_server": true, 00:04:40.414 "enable_zerocopy_send_client": false, 00:04:40.414 "zerocopy_threshold": 0, 00:04:40.414 "tls_version": 0, 00:04:40.414 "enable_ktls": false 00:04:40.414 } 00:04:40.414 }, 00:04:40.414 { 00:04:40.414 "method": "sock_impl_set_options", 00:04:40.414 "params": { 00:04:40.414 "impl_name": "posix", 00:04:40.414 "recv_buf_size": 2097152, 00:04:40.414 "send_buf_size": 2097152, 00:04:40.414 "enable_recv_pipe": true, 00:04:40.414 "enable_quickack": false, 00:04:40.414 "enable_placement_id": 0, 00:04:40.414 "enable_zerocopy_send_server": true, 00:04:40.414 "enable_zerocopy_send_client": false, 00:04:40.414 "zerocopy_threshold": 0, 00:04:40.414 "tls_version": 0, 00:04:40.414 "enable_ktls": false 00:04:40.414 } 00:04:40.414 } 00:04:40.414 ] 00:04:40.414 }, 00:04:40.414 { 00:04:40.414 "subsystem": "vmd", 00:04:40.414 "config": [] 00:04:40.414 }, 00:04:40.414 { 00:04:40.414 "subsystem": "accel", 00:04:40.414 "config": [ 00:04:40.414 { 00:04:40.414 "method": "accel_set_options", 00:04:40.414 "params": { 00:04:40.414 "small_cache_size": 128, 00:04:40.414 "large_cache_size": 16, 00:04:40.414 "task_count": 2048, 00:04:40.414 "sequence_count": 2048, 00:04:40.414 "buf_count": 2048 00:04:40.414 } 00:04:40.414 } 00:04:40.414 ] 00:04:40.414 }, 00:04:40.414 { 00:04:40.414 "subsystem": "bdev", 00:04:40.414 "config": [ 00:04:40.414 { 00:04:40.414 "method": "bdev_set_options", 00:04:40.414 "params": { 00:04:40.414 "bdev_io_pool_size": 65535, 00:04:40.414 "bdev_io_cache_size": 256, 00:04:40.414 "bdev_auto_examine": true, 00:04:40.414 "iobuf_small_cache_size": 128, 00:04:40.414 "iobuf_large_cache_size": 16 00:04:40.414 } 00:04:40.414 }, 00:04:40.415 { 00:04:40.415 "method": "bdev_raid_set_options", 00:04:40.415 "params": { 00:04:40.415 "process_window_size_kb": 1024 00:04:40.415 } 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "method": "bdev_iscsi_set_options", 00:04:40.415 "params": { 00:04:40.415 "timeout_sec": 30 00:04:40.415 } 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "method": "bdev_nvme_set_options", 00:04:40.415 "params": { 00:04:40.415 "action_on_timeout": "none", 00:04:40.415 "timeout_us": 0, 00:04:40.415 "timeout_admin_us": 0, 00:04:40.415 "keep_alive_timeout_ms": 10000, 00:04:40.415 "arbitration_burst": 0, 00:04:40.415 "low_priority_weight": 0, 00:04:40.415 "medium_priority_weight": 0, 00:04:40.415 "high_priority_weight": 0, 00:04:40.415 "nvme_adminq_poll_period_us": 10000, 00:04:40.415 "nvme_ioq_poll_period_us": 0, 00:04:40.415 "io_queue_requests": 0, 00:04:40.415 "delay_cmd_submit": true, 00:04:40.415 "transport_retry_count": 4, 00:04:40.415 "bdev_retry_count": 3, 00:04:40.415 "transport_ack_timeout": 0, 00:04:40.415 "ctrlr_loss_timeout_sec": 0, 00:04:40.415 "reconnect_delay_sec": 0, 00:04:40.415 "fast_io_fail_timeout_sec": 0, 00:04:40.415 "disable_auto_failback": false, 00:04:40.415 "generate_uuids": false, 00:04:40.415 "transport_tos": 0, 
00:04:40.415 "nvme_error_stat": false, 00:04:40.415 "rdma_srq_size": 0, 00:04:40.415 "io_path_stat": false, 00:04:40.415 "allow_accel_sequence": false, 00:04:40.415 "rdma_max_cq_size": 0, 00:04:40.415 "rdma_cm_event_timeout_ms": 0, 00:04:40.415 "dhchap_digests": [ 00:04:40.415 "sha256", 00:04:40.415 "sha384", 00:04:40.415 "sha512" 00:04:40.415 ], 00:04:40.415 "dhchap_dhgroups": [ 00:04:40.415 "null", 00:04:40.415 "ffdhe2048", 00:04:40.415 "ffdhe3072", 00:04:40.415 "ffdhe4096", 00:04:40.415 "ffdhe6144", 00:04:40.415 "ffdhe8192" 00:04:40.415 ] 00:04:40.415 } 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "method": "bdev_nvme_set_hotplug", 00:04:40.415 "params": { 00:04:40.415 "period_us": 100000, 00:04:40.415 "enable": false 00:04:40.415 } 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "method": "bdev_wait_for_examine" 00:04:40.415 } 00:04:40.415 ] 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "subsystem": "scsi", 00:04:40.415 "config": null 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "subsystem": "scheduler", 00:04:40.415 "config": [ 00:04:40.415 { 00:04:40.415 "method": "framework_set_scheduler", 00:04:40.415 "params": { 00:04:40.415 "name": "static" 00:04:40.415 } 00:04:40.415 } 00:04:40.415 ] 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "subsystem": "vhost_scsi", 00:04:40.415 "config": [] 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "subsystem": "vhost_blk", 00:04:40.415 "config": [] 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "subsystem": "ublk", 00:04:40.415 "config": [] 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "subsystem": "nbd", 00:04:40.415 "config": [] 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "subsystem": "nvmf", 00:04:40.415 "config": [ 00:04:40.415 { 00:04:40.415 "method": "nvmf_set_config", 00:04:40.415 "params": { 00:04:40.415 "discovery_filter": "match_any", 00:04:40.415 "admin_cmd_passthru": { 00:04:40.415 "identify_ctrlr": false 00:04:40.415 } 00:04:40.415 } 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "method": "nvmf_set_max_subsystems", 00:04:40.415 "params": { 00:04:40.415 "max_subsystems": 1024 00:04:40.415 } 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "method": "nvmf_set_crdt", 00:04:40.415 "params": { 00:04:40.415 "crdt1": 0, 00:04:40.415 "crdt2": 0, 00:04:40.415 "crdt3": 0 00:04:40.415 } 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "method": "nvmf_create_transport", 00:04:40.415 "params": { 00:04:40.415 "trtype": "TCP", 00:04:40.415 "max_queue_depth": 128, 00:04:40.415 "max_io_qpairs_per_ctrlr": 127, 00:04:40.415 "in_capsule_data_size": 4096, 00:04:40.415 "max_io_size": 131072, 00:04:40.415 "io_unit_size": 131072, 00:04:40.415 "max_aq_depth": 128, 00:04:40.415 "num_shared_buffers": 511, 00:04:40.415 "buf_cache_size": 4294967295, 00:04:40.415 "dif_insert_or_strip": false, 00:04:40.415 "zcopy": false, 00:04:40.415 "c2h_success": true, 00:04:40.415 "sock_priority": 0, 00:04:40.415 "abort_timeout_sec": 1, 00:04:40.415 "ack_timeout": 0, 00:04:40.415 "data_wr_pool_size": 0 00:04:40.415 } 00:04:40.415 } 00:04:40.415 ] 00:04:40.415 }, 00:04:40.415 { 00:04:40.415 "subsystem": "iscsi", 00:04:40.415 "config": [ 00:04:40.415 { 00:04:40.415 "method": "iscsi_set_options", 00:04:40.415 "params": { 00:04:40.415 "node_base": "iqn.2016-06.io.spdk", 00:04:40.415 "max_sessions": 128, 00:04:40.415 "max_connections_per_session": 2, 00:04:40.415 "max_queue_depth": 64, 00:04:40.415 "default_time2wait": 2, 00:04:40.415 "default_time2retain": 20, 00:04:40.415 "first_burst_length": 8192, 00:04:40.415 "immediate_data": true, 00:04:40.415 "allow_duplicated_isid": false, 00:04:40.415 
"error_recovery_level": 0, 00:04:40.415 "nop_timeout": 60, 00:04:40.415 "nop_in_interval": 30, 00:04:40.415 "disable_chap": false, 00:04:40.415 "require_chap": false, 00:04:40.415 "mutual_chap": false, 00:04:40.415 "chap_group": 0, 00:04:40.415 "max_large_datain_per_connection": 64, 00:04:40.415 "max_r2t_per_connection": 4, 00:04:40.415 "pdu_pool_size": 36864, 00:04:40.415 "immediate_data_pool_size": 16384, 00:04:40.415 "data_out_pool_size": 2048 00:04:40.415 } 00:04:40.415 } 00:04:40.415 ] 00:04:40.415 } 00:04:40.415 ] 00:04:40.415 } 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3512577 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3512577 ']' 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3512577 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3512577 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3512577' 00:04:40.415 killing process with pid 3512577 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3512577 00:04:40.415 21:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3512577 00:04:40.674 21:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3512818 00:04:40.674 21:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.674 21:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3512818 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3512818 ']' 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3512818 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3512818 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3512818' 00:04:45.946 killing process with pid 3512818 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3512818 00:04:45.946 21:42:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3512818 
00:04:46.205 21:42:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:46.205 21:42:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:46.205 00:04:46.205 real 0m6.746s 00:04:46.205 user 0m6.571s 00:04:46.205 sys 0m0.588s 00:04:46.205 21:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.205 21:42:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.205 ************************************ 00:04:46.205 END TEST skip_rpc_with_json 00:04:46.205 ************************************ 00:04:46.205 21:42:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.205 21:42:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:46.205 21:42:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.205 21:42:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.205 21:42:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.205 ************************************ 00:04:46.205 START TEST skip_rpc_with_delay 00:04:46.205 ************************************ 00:04:46.205 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:46.205 21:42:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.205 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:46.205 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.206 [2024-07-15 21:42:40.368854] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
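This failure is the point of skip_rpc_with_delay: --wait-for-rpc parks the target before framework initialization until an RPC tells it to proceed, so pairing it with --no-rpc-server can never make progress and spdk_app_start rejects the combination outright. Against a normal target the flag is used roughly like this:

  spdk_tgt --wait-for-rpc &                  # start, but hold subsystem init
  ./scripts/rpc.py framework_start_init      # release the deferred initialization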
00:04:46.206 [2024-07-15 21:42:40.368913] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.206 00:04:46.206 real 0m0.066s 00:04:46.206 user 0m0.041s 00:04:46.206 sys 0m0.024s 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.206 21:42:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:46.206 ************************************ 00:04:46.206 END TEST skip_rpc_with_delay 00:04:46.206 ************************************ 00:04:46.206 21:42:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.206 21:42:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:46.206 21:42:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:46.206 21:42:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:46.206 21:42:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.206 21:42:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.206 21:42:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.206 ************************************ 00:04:46.206 START TEST exit_on_failed_rpc_init 00:04:46.206 ************************************ 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3513787 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3513787 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3513787 ']' 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.464 21:42:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.464 [2024-07-15 21:42:40.493605] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
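Aside: exit_on_failed_rpc_init provokes an init failure on purpose. With the first target holding the default RPC socket, a second target is pointed at the same path and must shut itself down. A condensed sketch of the conflict being set up (the sleep stands in for the test's waitforlisten helper):

    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &   # first instance claims /var/tmp/spdk.sock
    sleep 1                                 # illustrative wait until the socket is live
    $SPDK_DIR/build/bin/spdk_tgt -m 0x2     # same socket: rpc listen fails, the app exits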
00:04:46.464 [2024-07-15 21:42:40.493644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513787 ] 00:04:46.464 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.464 [2024-07-15 21:42:40.547216] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.464 [2024-07-15 21:42:40.626628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.400 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.400 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:47.400 21:42:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.400 21:42:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.400 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:47.400 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.400 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.401 [2024-07-15 21:42:41.350933] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
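The es= lines that follow are the NOT wrapper's exit-status handling: a status above 128 means the child died by signal, so it is normalized before the failure check. Roughly (a simplification of the autotest_common.sh logic, not a verbatim copy):

    NOT() {
        "$@" && return 1                       # the wrapped command was required to fail
        local es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # e.g. 234 -> 106, matching the es= lines below
        return 0                               # any failure at all satisfies the check
    }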
00:04:47.401 [2024-07-15 21:42:41.350978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513975 ] 00:04:47.401 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.401 [2024-07-15 21:42:41.403856] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.401 [2024-07-15 21:42:41.477153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.401 [2024-07-15 21:42:41.477214] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:47.401 [2024-07-15 21:42:41.477223] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:47.401 [2024-07-15 21:42:41.477234] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3513787 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3513787 ']' 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3513787 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3513787 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3513787' 00:04:47.401 killing process with pid 3513787 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3513787 00:04:47.401 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3513787 00:04:47.660 00:04:47.660 real 0m1.455s 00:04:47.660 user 0m1.692s 00:04:47.660 sys 0m0.383s 00:04:47.921 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.921 21:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.921 ************************************ 00:04:47.921 END TEST exit_on_failed_rpc_init 00:04:47.921 ************************************ 00:04:47.921 21:42:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:47.921 21:42:41 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:47.921 00:04:47.921 real 0m13.988s 00:04:47.921 user 0m13.571s 00:04:47.921 sys 0m1.499s 00:04:47.921 21:42:41 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.921 21:42:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.921 ************************************ 00:04:47.921 END TEST skip_rpc 00:04:47.921 ************************************ 00:04:47.921 21:42:41 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.921 21:42:41 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:47.921 21:42:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.921 21:42:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.921 21:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:47.921 ************************************ 00:04:47.921 START TEST rpc_client 00:04:47.921 ************************************ 00:04:47.921 21:42:41 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:47.921 * Looking for test storage... 00:04:47.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:47.921 21:42:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:47.921 OK 00:04:47.921 21:42:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:47.921 00:04:47.921 real 0m0.106s 00:04:47.921 user 0m0.055s 00:04:47.921 sys 0m0.058s 00:04:47.921 21:42:42 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.921 21:42:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:47.921 ************************************ 00:04:47.921 END TEST rpc_client 00:04:47.921 ************************************ 00:04:47.921 21:42:42 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.921 21:42:42 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:47.921 21:42:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.921 21:42:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.921 21:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:47.921 ************************************ 00:04:47.921 START TEST json_config 00:04:47.921 ************************************ 00:04:47.921 21:42:42 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:48.181 21:42:42 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:48.181 21:42:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.182 
21:42:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:48.182 21:42:42 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.182 21:42:42 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.182 21:42:42 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.182 21:42:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.182 21:42:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.182 21:42:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.182 21:42:42 json_config -- paths/export.sh@5 -- # export PATH 00:04:48.182 21:42:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@47 -- # : 0 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.182 21:42:42 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:48.182 21:42:42 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:48.182 INFO: JSON configuration test init 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.182 21:42:42 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:48.182 21:42:42 json_config -- json_config/common.sh@9 -- # local app=target 00:04:48.182 21:42:42 json_config -- json_config/common.sh@10 -- # shift 00:04:48.182 21:42:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.182 21:42:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.182 21:42:42 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.182 21:42:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.182 21:42:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.182 21:42:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3514143 00:04:48.182 21:42:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.182 Waiting for target to run... 00:04:48.182 21:42:42 json_config -- json_config/common.sh@25 -- # waitforlisten 3514143 /var/tmp/spdk_tgt.sock 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@829 -- # '[' -z 3514143 ']' 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.182 21:42:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.182 21:42:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.182 [2024-07-15 21:42:42.313466] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:04:48.182 [2024-07-15 21:42:42.313517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514143 ] 00:04:48.182 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.749 [2024-07-15 21:42:42.761847] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.749 [2024-07-15 21:42:42.853516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.020 21:42:43 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.020 21:42:43 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:49.020 21:42:43 json_config -- json_config/common.sh@26 -- # echo '' 00:04:49.020 00:04:49.020 21:42:43 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:49.020 21:42:43 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:49.020 21:42:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.020 21:42:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.020 21:42:43 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:49.020 21:42:43 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:49.020 21:42:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.020 21:42:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.021 21:42:43 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:49.021 21:42:43 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:49.021 21:42:43 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:52.314 21:42:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.314 21:42:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:52.314 21:42:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:52.314 21:42:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:52.314 21:42:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:52.314 21:42:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.314 21:42:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:52.314 21:42:46 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.314 21:42:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.573 MallocForNvmf0 00:04:52.573 21:42:46 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.573 21:42:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.573 MallocForNvmf1 00:04:52.573 21:42:46 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:52.573 21:42:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:52.830 [2024-07-15 21:42:46.938696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.830 21:42:46 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:52.830 21:42:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.088 21:42:47 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.088 21:42:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.088 21:42:47 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.088 21:42:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.345 21:42:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.345 21:42:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.603 [2024-07-15 21:42:47.612793] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:53.603 21:42:47 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:53.603 21:42:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.603 21:42:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.603 21:42:47 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:53.603 21:42:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.603 21:42:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.603 21:42:47 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:53.603 21:42:47 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.603 21:42:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.889 MallocBdevForConfigChangeCheck 00:04:53.889 21:42:47 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:53.889 21:42:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.890 21:42:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.890 21:42:47 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:53.890 21:42:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.147 21:42:48 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:54.147 INFO: shutting down applications... 00:04:54.147 21:42:48 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:54.147 21:42:48 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:54.147 21:42:48 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:54.147 21:42:48 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:56.048 Calling clear_iscsi_subsystem 00:04:56.048 Calling clear_nvmf_subsystem 00:04:56.048 Calling clear_nbd_subsystem 00:04:56.048 Calling clear_ublk_subsystem 00:04:56.048 Calling clear_vhost_blk_subsystem 00:04:56.048 Calling clear_vhost_scsi_subsystem 00:04:56.048 Calling clear_bdev_subsystem 00:04:56.048 21:42:49 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:56.048 21:42:49 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:56.048 21:42:49 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:56.048 21:42:49 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:56.048 21:42:49 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.048 21:42:49 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:56.048 21:42:50 json_config -- json_config/json_config.sh@345 -- # break 00:04:56.048 21:42:50 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:56.048 21:42:50 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:56.048 21:42:50 json_config -- json_config/common.sh@31 -- # local app=target 00:04:56.048 21:42:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.048 21:42:50 json_config -- json_config/common.sh@35 -- # [[ -n 3514143 ]] 00:04:56.048 21:42:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3514143 00:04:56.048 21:42:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.048 21:42:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.048 21:42:50 json_config -- json_config/common.sh@41 -- # kill -0 3514143 00:04:56.048 21:42:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.616 21:42:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.616 21:42:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.616 21:42:50 json_config -- json_config/common.sh@41 -- # kill -0 3514143 00:04:56.616 21:42:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.616 21:42:50 json_config -- json_config/common.sh@43 -- # break 00:04:56.616 21:42:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.616 21:42:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:56.616 SPDK target shutdown done 00:04:56.616 21:42:50 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:56.616 INFO: relaunching applications... 00:04:56.616 21:42:50 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.616 21:42:50 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.616 21:42:50 json_config -- json_config/common.sh@10 -- # shift 00:04:56.616 21:42:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.616 21:42:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.616 21:42:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.616 21:42:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.616 21:42:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.616 21:42:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3515666 00:04:56.616 21:42:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.616 21:42:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.616 Waiting for target to run... 00:04:56.616 21:42:50 json_config -- json_config/common.sh@25 -- # waitforlisten 3515666 /var/tmp/spdk_tgt.sock 00:04:56.616 21:42:50 json_config -- common/autotest_common.sh@829 -- # '[' -z 3515666 ']' 00:04:56.616 21:42:50 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.616 21:42:50 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.616 21:42:50 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.616 21:42:50 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.616 21:42:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.616 [2024-07-15 21:42:50.648216] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
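For reference, the target state that this relaunch has to reproduce from spdk_tgt_config.json was built earlier in the test with a handful of RPCs. Condensed from the calls visible in the log, with an illustrative rpc() shorthand for the rpc.py invocation:

    rpc() { $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420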
00:04:56.616 [2024-07-15 21:42:50.648283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515666 ] 00:04:56.616 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.875 [2024-07-15 21:42:51.095268] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.135 [2024-07-15 21:42:51.187702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.423 [2024-07-15 21:42:54.198768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.423 [2024-07-15 21:42:54.231066] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.681 21:42:54 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.681 21:42:54 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:00.681 21:42:54 json_config -- json_config/common.sh@26 -- # echo '' 00:05:00.681 00:05:00.681 21:42:54 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:00.681 21:42:54 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:00.681 INFO: Checking if target configuration is the same... 00:05:00.681 21:42:54 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.681 21:42:54 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:00.681 21:42:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.681 + '[' 2 -ne 2 ']' 00:05:00.681 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:00.681 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:00.681 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:00.681 +++ basename /dev/fd/62 00:05:00.681 ++ mktemp /tmp/62.XXX 00:05:00.681 + tmp_file_1=/tmp/62.gxT 00:05:00.681 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.681 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:00.681 + tmp_file_2=/tmp/spdk_tgt_config.json.BMS 00:05:00.681 + ret=0 00:05:00.681 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:00.939 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:00.939 + diff -u /tmp/62.gxT /tmp/spdk_tgt_config.json.BMS 00:05:00.939 + echo 'INFO: JSON config files are the same' 00:05:00.939 INFO: JSON config files are the same 00:05:00.939 + rm /tmp/62.gxT /tmp/spdk_tgt_config.json.BMS 00:05:00.939 + exit 0 00:05:00.939 21:42:55 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:00.939 21:42:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:00.939 INFO: changing configuration and checking if this can be detected... 
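Note: json_diff.sh never diffs the raw files. Both sides are first passed through config_filter.py -method sort, presumably so that differences in save_config output ordering between runs do not count as configuration changes. The comparison then reduces to (file names illustrative):

    $SPDK_DIR/test/json_config/config_filter.py -method sort < live.json  > live.sorted
    $SPDK_DIR/test/json_config/config_filter.py -method sort < saved.json > saved.sorted
    diff -u live.sorted saved.sorted && echo 'INFO: JSON config files are the same'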
00:05:00.939 21:42:55 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:00.939 21:42:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:01.197 21:42:55 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.197 21:42:55 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:01.197 21:42:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.197 + '[' 2 -ne 2 ']' 00:05:01.197 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:01.197 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:01.197 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:01.197 +++ basename /dev/fd/62 00:05:01.197 ++ mktemp /tmp/62.XXX 00:05:01.197 + tmp_file_1=/tmp/62.nmT 00:05:01.197 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.197 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:01.197 + tmp_file_2=/tmp/spdk_tgt_config.json.h4D 00:05:01.197 + ret=0 00:05:01.197 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:01.455 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:01.455 + diff -u /tmp/62.nmT /tmp/spdk_tgt_config.json.h4D 00:05:01.455 + ret=1 00:05:01.455 + echo '=== Start of file: /tmp/62.nmT ===' 00:05:01.455 + cat /tmp/62.nmT 00:05:01.714 + echo '=== End of file: /tmp/62.nmT ===' 00:05:01.714 + echo '' 00:05:01.714 + echo '=== Start of file: /tmp/spdk_tgt_config.json.h4D ===' 00:05:01.714 + cat /tmp/spdk_tgt_config.json.h4D 00:05:01.714 + echo '=== End of file: /tmp/spdk_tgt_config.json.h4D ===' 00:05:01.714 + echo '' 00:05:01.714 + rm /tmp/62.nmT /tmp/spdk_tgt_config.json.h4D 00:05:01.714 + exit 1 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:01.714 INFO: configuration change detected. 
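The killprocess calls that recur throughout this log all follow one pattern; stripped of the xtrace noise it is roughly this (a simplification of the autotest_common.sh helper, which also special-cases processes running under sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # bail out if the pid is already gone
        local name=$(ps --no-headers -o comm= "$pid")    # reactor_0 here; the real helper branches on this
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                       # wait reaps it and surfaces its status
    }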
00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@317 -- # [[ -n 3515666 ]] 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.714 21:42:55 json_config -- json_config/json_config.sh@323 -- # killprocess 3515666 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@948 -- # '[' -z 3515666 ']' 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@952 -- # kill -0 3515666 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@953 -- # uname 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3515666 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3515666' 00:05:01.714 killing process with pid 3515666 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@967 -- # kill 3515666 00:05:01.714 21:42:55 json_config -- common/autotest_common.sh@972 -- # wait 3515666 00:05:03.089 21:42:57 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.089 21:42:57 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:03.089 21:42:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.089 21:42:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.089 21:42:57 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:03.089 21:42:57 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:03.089 INFO: Success 00:05:03.089 00:05:03.089 real 0m15.141s 
00:05:03.089 user 0m15.706s 00:05:03.089 sys 0m2.020s 00:05:03.089 21:42:57 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.089 21:42:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.089 ************************************ 00:05:03.089 END TEST json_config 00:05:03.089 ************************************ 00:05:03.347 21:42:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:03.347 21:42:57 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:03.347 21:42:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.347 21:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.347 21:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:03.347 ************************************ 00:05:03.347 START TEST json_config_extra_key 00:05:03.347 ************************************ 00:05:03.347 21:42:57 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:03.347 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.347 21:42:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:03.348 21:42:57 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.348 21:42:57 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.348 21:42:57 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.348 21:42:57 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.348 21:42:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.348 21:42:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.348 21:42:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:03.348 21:42:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:03.348 21:42:57 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:03.348 21:42:57 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:03.348 INFO: launching applications... 00:05:03.348 21:42:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3516967 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.348 Waiting for target to run... 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3516967 /var/tmp/spdk_tgt.sock 00:05:03.348 21:42:57 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3516967 ']' 00:05:03.348 21:42:57 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:03.348 21:42:57 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.348 21:42:57 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.348 21:42:57 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.348 21:42:57 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.348 21:42:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.348 [2024-07-15 21:42:57.503975] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
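When this target is shut down a few lines further on, the poll loop from json_config/common.sh runs once more: SIGINT first, then up to 30 liveness probes half a second apart. In outline:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # gone -> 'SPDK target shutdown done'
        sleep 0.5
    done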
00:05:03.348 [2024-07-15 21:42:57.504027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3516967 ] 00:05:03.348 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.607 [2024-07-15 21:42:57.784597] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.866 [2024-07-15 21:42:57.852621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.124 21:42:58 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.124 21:42:58 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:04.124 21:42:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:04.124 00:05:04.124 21:42:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:04.124 INFO: shutting down applications... 00:05:04.124 21:42:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:04.124 21:42:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:04.124 21:42:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.124 21:42:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3516967 ]] 00:05:04.124 21:42:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3516967 00:05:04.124 21:42:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.124 21:42:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.124 21:42:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3516967 00:05:04.124 21:42:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.690 21:42:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.690 21:42:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.690 21:42:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3516967 00:05:04.690 21:42:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:04.690 21:42:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:04.690 21:42:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:04.690 21:42:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:04.690 SPDK target shutdown done 00:05:04.690 21:42:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:04.690 Success 00:05:04.690 00:05:04.690 real 0m1.449s 00:05:04.690 user 0m1.240s 00:05:04.690 sys 0m0.362s 00:05:04.690 21:42:58 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.690 21:42:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:04.690 ************************************ 00:05:04.690 END TEST json_config_extra_key 00:05:04.690 ************************************ 00:05:04.690 21:42:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.690 21:42:58 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:04.690 21:42:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.690 21:42:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.690 21:42:58 -- 
common/autotest_common.sh@10 -- # set +x 00:05:04.690 ************************************ 00:05:04.690 START TEST alias_rpc 00:05:04.690 ************************************ 00:05:04.690 21:42:58 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:04.948 * Looking for test storage... 00:05:04.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:04.948 21:42:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:04.948 21:42:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3517357 00:05:04.948 21:42:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3517357 00:05:04.948 21:42:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.948 21:42:58 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3517357 ']' 00:05:04.948 21:42:58 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.948 21:42:58 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.948 21:42:58 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.948 21:42:58 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.948 21:42:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.948 [2024-07-15 21:42:59.020908] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:04.949 [2024-07-15 21:42:59.020960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517357 ] 00:05:04.949 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.949 [2024-07-15 21:42:59.076011] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.949 [2024-07-15 21:42:59.148002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.884 21:42:59 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.884 21:42:59 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:05.884 21:42:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:05.884 21:43:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3517357 00:05:05.884 21:43:00 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3517357 ']' 00:05:05.884 21:43:00 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3517357 00:05:05.884 21:43:00 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:05.884 21:43:00 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.884 21:43:00 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3517357 00:05:05.884 21:43:00 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.884 21:43:00 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.884 21:43:00 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3517357' 00:05:05.884 killing process with pid 3517357 00:05:05.884 21:43:00 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3517357 00:05:05.884 21:43:00 alias_rpc -- common/autotest_common.sh@972 -- # wait 3517357 00:05:06.142 00:05:06.142 real 0m1.497s 00:05:06.142 user 0m1.650s 00:05:06.142 sys 0m0.390s 00:05:06.142 21:43:00 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.142 21:43:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.142 ************************************ 00:05:06.142 END TEST alias_rpc 00:05:06.142 ************************************ 00:05:06.401 21:43:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:06.401 21:43:00 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:06.401 21:43:00 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:06.401 21:43:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.401 21:43:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.401 21:43:00 -- common/autotest_common.sh@10 -- # set +x 00:05:06.401 ************************************ 00:05:06.401 START TEST spdkcli_tcp 00:05:06.401 ************************************ 00:05:06.401 21:43:00 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:06.401 * Looking for test storage... 00:05:06.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:06.401 21:43:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:06.401 21:43:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:06.401 21:43:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:06.401 21:43:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:06.401 21:43:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:06.401 21:43:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:06.401 21:43:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:06.401 21:43:00 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.401 21:43:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.402 21:43:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3517699 00:05:06.402 21:43:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3517699 00:05:06.402 21:43:00 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3517699 ']' 00:05:06.402 21:43:00 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.402 21:43:00 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.402 21:43:00 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.402 21:43:00 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.402 21:43:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.402 21:43:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:06.402 [2024-07-15 21:43:00.584819] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
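A note on the alias_rpc run that finished above: the whole test body is a single rpc.py invocation that replays a saved JSON configuration with method aliases enabled. Roughly as follows, where reading -i as --include-aliases and the stdin-based replay are assumptions about rpc.py rather than facts taken from this log, and the file name is purely illustrative:

    # Replay a saved configuration against the running target; with the
    # alias flag, deprecated method names in the JSON are also accepted.
    scripts/rpc.py load_config -i < /tmp/saved_config.json

The ERR trap around it (killprocess $spdk_tgt_pid) guarantees the target does not outlive a failed replay.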
00:05:06.402 [2024-07-15 21:43:00.584868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517699 ] 00:05:06.402 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.402 [2024-07-15 21:43:00.639237] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.663 [2024-07-15 21:43:00.719118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.663 [2024-07-15 21:43:00.719121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.234 21:43:01 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.234 21:43:01 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:07.234 21:43:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3517719 00:05:07.234 21:43:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:07.234 21:43:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:07.493 [ 00:05:07.493 "bdev_malloc_delete", 00:05:07.493 "bdev_malloc_create", 00:05:07.493 "bdev_null_resize", 00:05:07.493 "bdev_null_delete", 00:05:07.493 "bdev_null_create", 00:05:07.493 "bdev_nvme_cuse_unregister", 00:05:07.493 "bdev_nvme_cuse_register", 00:05:07.493 "bdev_opal_new_user", 00:05:07.493 "bdev_opal_set_lock_state", 00:05:07.493 "bdev_opal_delete", 00:05:07.493 "bdev_opal_get_info", 00:05:07.493 "bdev_opal_create", 00:05:07.493 "bdev_nvme_opal_revert", 00:05:07.493 "bdev_nvme_opal_init", 00:05:07.493 "bdev_nvme_send_cmd", 00:05:07.493 "bdev_nvme_get_path_iostat", 00:05:07.493 "bdev_nvme_get_mdns_discovery_info", 00:05:07.493 "bdev_nvme_stop_mdns_discovery", 00:05:07.493 "bdev_nvme_start_mdns_discovery", 00:05:07.493 "bdev_nvme_set_multipath_policy", 00:05:07.493 "bdev_nvme_set_preferred_path", 00:05:07.493 "bdev_nvme_get_io_paths", 00:05:07.493 "bdev_nvme_remove_error_injection", 00:05:07.493 "bdev_nvme_add_error_injection", 00:05:07.493 "bdev_nvme_get_discovery_info", 00:05:07.493 "bdev_nvme_stop_discovery", 00:05:07.493 "bdev_nvme_start_discovery", 00:05:07.493 "bdev_nvme_get_controller_health_info", 00:05:07.493 "bdev_nvme_disable_controller", 00:05:07.493 "bdev_nvme_enable_controller", 00:05:07.493 "bdev_nvme_reset_controller", 00:05:07.493 "bdev_nvme_get_transport_statistics", 00:05:07.493 "bdev_nvme_apply_firmware", 00:05:07.493 "bdev_nvme_detach_controller", 00:05:07.493 "bdev_nvme_get_controllers", 00:05:07.493 "bdev_nvme_attach_controller", 00:05:07.493 "bdev_nvme_set_hotplug", 00:05:07.493 "bdev_nvme_set_options", 00:05:07.493 "bdev_passthru_delete", 00:05:07.493 "bdev_passthru_create", 00:05:07.493 "bdev_lvol_set_parent_bdev", 00:05:07.493 "bdev_lvol_set_parent", 00:05:07.493 "bdev_lvol_check_shallow_copy", 00:05:07.493 "bdev_lvol_start_shallow_copy", 00:05:07.493 "bdev_lvol_grow_lvstore", 00:05:07.493 "bdev_lvol_get_lvols", 00:05:07.494 "bdev_lvol_get_lvstores", 00:05:07.494 "bdev_lvol_delete", 00:05:07.494 "bdev_lvol_set_read_only", 00:05:07.494 "bdev_lvol_resize", 00:05:07.494 "bdev_lvol_decouple_parent", 00:05:07.494 "bdev_lvol_inflate", 00:05:07.494 "bdev_lvol_rename", 00:05:07.494 "bdev_lvol_clone_bdev", 00:05:07.494 "bdev_lvol_clone", 00:05:07.494 "bdev_lvol_snapshot", 00:05:07.494 "bdev_lvol_create", 00:05:07.494 "bdev_lvol_delete_lvstore", 00:05:07.494 
"bdev_lvol_rename_lvstore", 00:05:07.494 "bdev_lvol_create_lvstore", 00:05:07.494 "bdev_raid_set_options", 00:05:07.494 "bdev_raid_remove_base_bdev", 00:05:07.494 "bdev_raid_add_base_bdev", 00:05:07.494 "bdev_raid_delete", 00:05:07.494 "bdev_raid_create", 00:05:07.494 "bdev_raid_get_bdevs", 00:05:07.494 "bdev_error_inject_error", 00:05:07.494 "bdev_error_delete", 00:05:07.494 "bdev_error_create", 00:05:07.494 "bdev_split_delete", 00:05:07.494 "bdev_split_create", 00:05:07.494 "bdev_delay_delete", 00:05:07.494 "bdev_delay_create", 00:05:07.494 "bdev_delay_update_latency", 00:05:07.494 "bdev_zone_block_delete", 00:05:07.494 "bdev_zone_block_create", 00:05:07.494 "blobfs_create", 00:05:07.494 "blobfs_detect", 00:05:07.494 "blobfs_set_cache_size", 00:05:07.494 "bdev_aio_delete", 00:05:07.494 "bdev_aio_rescan", 00:05:07.494 "bdev_aio_create", 00:05:07.494 "bdev_ftl_set_property", 00:05:07.494 "bdev_ftl_get_properties", 00:05:07.494 "bdev_ftl_get_stats", 00:05:07.494 "bdev_ftl_unmap", 00:05:07.494 "bdev_ftl_unload", 00:05:07.494 "bdev_ftl_delete", 00:05:07.494 "bdev_ftl_load", 00:05:07.494 "bdev_ftl_create", 00:05:07.494 "bdev_virtio_attach_controller", 00:05:07.494 "bdev_virtio_scsi_get_devices", 00:05:07.494 "bdev_virtio_detach_controller", 00:05:07.494 "bdev_virtio_blk_set_hotplug", 00:05:07.494 "bdev_iscsi_delete", 00:05:07.494 "bdev_iscsi_create", 00:05:07.494 "bdev_iscsi_set_options", 00:05:07.494 "accel_error_inject_error", 00:05:07.494 "ioat_scan_accel_module", 00:05:07.494 "dsa_scan_accel_module", 00:05:07.494 "iaa_scan_accel_module", 00:05:07.494 "vfu_virtio_create_scsi_endpoint", 00:05:07.494 "vfu_virtio_scsi_remove_target", 00:05:07.494 "vfu_virtio_scsi_add_target", 00:05:07.494 "vfu_virtio_create_blk_endpoint", 00:05:07.494 "vfu_virtio_delete_endpoint", 00:05:07.494 "keyring_file_remove_key", 00:05:07.494 "keyring_file_add_key", 00:05:07.494 "keyring_linux_set_options", 00:05:07.494 "iscsi_get_histogram", 00:05:07.494 "iscsi_enable_histogram", 00:05:07.494 "iscsi_set_options", 00:05:07.494 "iscsi_get_auth_groups", 00:05:07.494 "iscsi_auth_group_remove_secret", 00:05:07.494 "iscsi_auth_group_add_secret", 00:05:07.494 "iscsi_delete_auth_group", 00:05:07.494 "iscsi_create_auth_group", 00:05:07.494 "iscsi_set_discovery_auth", 00:05:07.494 "iscsi_get_options", 00:05:07.494 "iscsi_target_node_request_logout", 00:05:07.494 "iscsi_target_node_set_redirect", 00:05:07.494 "iscsi_target_node_set_auth", 00:05:07.494 "iscsi_target_node_add_lun", 00:05:07.494 "iscsi_get_stats", 00:05:07.494 "iscsi_get_connections", 00:05:07.494 "iscsi_portal_group_set_auth", 00:05:07.494 "iscsi_start_portal_group", 00:05:07.494 "iscsi_delete_portal_group", 00:05:07.494 "iscsi_create_portal_group", 00:05:07.494 "iscsi_get_portal_groups", 00:05:07.494 "iscsi_delete_target_node", 00:05:07.494 "iscsi_target_node_remove_pg_ig_maps", 00:05:07.494 "iscsi_target_node_add_pg_ig_maps", 00:05:07.494 "iscsi_create_target_node", 00:05:07.494 "iscsi_get_target_nodes", 00:05:07.494 "iscsi_delete_initiator_group", 00:05:07.494 "iscsi_initiator_group_remove_initiators", 00:05:07.494 "iscsi_initiator_group_add_initiators", 00:05:07.494 "iscsi_create_initiator_group", 00:05:07.494 "iscsi_get_initiator_groups", 00:05:07.494 "nvmf_set_crdt", 00:05:07.494 "nvmf_set_config", 00:05:07.494 "nvmf_set_max_subsystems", 00:05:07.494 "nvmf_stop_mdns_prr", 00:05:07.494 "nvmf_publish_mdns_prr", 00:05:07.494 "nvmf_subsystem_get_listeners", 00:05:07.494 "nvmf_subsystem_get_qpairs", 00:05:07.494 "nvmf_subsystem_get_controllers", 00:05:07.494 
"nvmf_get_stats", 00:05:07.494 "nvmf_get_transports", 00:05:07.494 "nvmf_create_transport", 00:05:07.494 "nvmf_get_targets", 00:05:07.494 "nvmf_delete_target", 00:05:07.494 "nvmf_create_target", 00:05:07.494 "nvmf_subsystem_allow_any_host", 00:05:07.494 "nvmf_subsystem_remove_host", 00:05:07.494 "nvmf_subsystem_add_host", 00:05:07.494 "nvmf_ns_remove_host", 00:05:07.494 "nvmf_ns_add_host", 00:05:07.494 "nvmf_subsystem_remove_ns", 00:05:07.494 "nvmf_subsystem_add_ns", 00:05:07.494 "nvmf_subsystem_listener_set_ana_state", 00:05:07.494 "nvmf_discovery_get_referrals", 00:05:07.494 "nvmf_discovery_remove_referral", 00:05:07.494 "nvmf_discovery_add_referral", 00:05:07.494 "nvmf_subsystem_remove_listener", 00:05:07.494 "nvmf_subsystem_add_listener", 00:05:07.494 "nvmf_delete_subsystem", 00:05:07.494 "nvmf_create_subsystem", 00:05:07.494 "nvmf_get_subsystems", 00:05:07.494 "env_dpdk_get_mem_stats", 00:05:07.494 "nbd_get_disks", 00:05:07.494 "nbd_stop_disk", 00:05:07.494 "nbd_start_disk", 00:05:07.494 "ublk_recover_disk", 00:05:07.494 "ublk_get_disks", 00:05:07.494 "ublk_stop_disk", 00:05:07.494 "ublk_start_disk", 00:05:07.494 "ublk_destroy_target", 00:05:07.494 "ublk_create_target", 00:05:07.494 "virtio_blk_create_transport", 00:05:07.494 "virtio_blk_get_transports", 00:05:07.494 "vhost_controller_set_coalescing", 00:05:07.494 "vhost_get_controllers", 00:05:07.494 "vhost_delete_controller", 00:05:07.494 "vhost_create_blk_controller", 00:05:07.494 "vhost_scsi_controller_remove_target", 00:05:07.494 "vhost_scsi_controller_add_target", 00:05:07.494 "vhost_start_scsi_controller", 00:05:07.494 "vhost_create_scsi_controller", 00:05:07.494 "thread_set_cpumask", 00:05:07.494 "framework_get_governor", 00:05:07.494 "framework_get_scheduler", 00:05:07.494 "framework_set_scheduler", 00:05:07.494 "framework_get_reactors", 00:05:07.494 "thread_get_io_channels", 00:05:07.494 "thread_get_pollers", 00:05:07.494 "thread_get_stats", 00:05:07.494 "framework_monitor_context_switch", 00:05:07.494 "spdk_kill_instance", 00:05:07.494 "log_enable_timestamps", 00:05:07.494 "log_get_flags", 00:05:07.494 "log_clear_flag", 00:05:07.494 "log_set_flag", 00:05:07.494 "log_get_level", 00:05:07.494 "log_set_level", 00:05:07.494 "log_get_print_level", 00:05:07.494 "log_set_print_level", 00:05:07.494 "framework_enable_cpumask_locks", 00:05:07.494 "framework_disable_cpumask_locks", 00:05:07.494 "framework_wait_init", 00:05:07.494 "framework_start_init", 00:05:07.494 "scsi_get_devices", 00:05:07.494 "bdev_get_histogram", 00:05:07.494 "bdev_enable_histogram", 00:05:07.494 "bdev_set_qos_limit", 00:05:07.494 "bdev_set_qd_sampling_period", 00:05:07.494 "bdev_get_bdevs", 00:05:07.494 "bdev_reset_iostat", 00:05:07.494 "bdev_get_iostat", 00:05:07.494 "bdev_examine", 00:05:07.494 "bdev_wait_for_examine", 00:05:07.494 "bdev_set_options", 00:05:07.494 "notify_get_notifications", 00:05:07.494 "notify_get_types", 00:05:07.494 "accel_get_stats", 00:05:07.494 "accel_set_options", 00:05:07.494 "accel_set_driver", 00:05:07.494 "accel_crypto_key_destroy", 00:05:07.494 "accel_crypto_keys_get", 00:05:07.494 "accel_crypto_key_create", 00:05:07.494 "accel_assign_opc", 00:05:07.494 "accel_get_module_info", 00:05:07.494 "accel_get_opc_assignments", 00:05:07.494 "vmd_rescan", 00:05:07.494 "vmd_remove_device", 00:05:07.494 "vmd_enable", 00:05:07.494 "sock_get_default_impl", 00:05:07.494 "sock_set_default_impl", 00:05:07.494 "sock_impl_set_options", 00:05:07.494 "sock_impl_get_options", 00:05:07.494 "iobuf_get_stats", 00:05:07.494 "iobuf_set_options", 
00:05:07.494 "keyring_get_keys", 00:05:07.494 "framework_get_pci_devices", 00:05:07.494 "framework_get_config", 00:05:07.494 "framework_get_subsystems", 00:05:07.494 "vfu_tgt_set_base_path", 00:05:07.494 "trace_get_info", 00:05:07.494 "trace_get_tpoint_group_mask", 00:05:07.494 "trace_disable_tpoint_group", 00:05:07.494 "trace_enable_tpoint_group", 00:05:07.494 "trace_clear_tpoint_mask", 00:05:07.494 "trace_set_tpoint_mask", 00:05:07.494 "spdk_get_version", 00:05:07.494 "rpc_get_methods" 00:05:07.494 ] 00:05:07.494 21:43:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:07.494 21:43:01 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.494 21:43:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:07.494 21:43:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:07.494 21:43:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3517699 00:05:07.494 21:43:01 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3517699 ']' 00:05:07.494 21:43:01 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3517699 00:05:07.494 21:43:01 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:07.494 21:43:01 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.494 21:43:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3517699 00:05:07.494 21:43:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.494 21:43:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.495 21:43:01 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3517699' 00:05:07.495 killing process with pid 3517699 00:05:07.495 21:43:01 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3517699 00:05:07.495 21:43:01 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3517699 00:05:07.754 00:05:07.754 real 0m1.487s 00:05:07.754 user 0m2.760s 00:05:07.754 sys 0m0.424s 00:05:07.754 21:43:01 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.754 21:43:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:07.754 ************************************ 00:05:07.754 END TEST spdkcli_tcp 00:05:07.754 ************************************ 00:05:07.754 21:43:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.754 21:43:01 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:07.754 21:43:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.754 21:43:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.754 21:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:08.013 ************************************ 00:05:08.013 START TEST dpdk_mem_utility 00:05:08.013 ************************************ 00:05:08.013 21:43:01 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:08.013 * Looking for test storage... 
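The long rpc_get_methods listing just above was fetched over TCP rather than over the UNIX socket directly: spdkcli_tcp bridges 127.0.0.1:9998 to /var/tmp/spdk.sock with socat and points rpc.py at the TCP side, exactly as the two commands in the trace show. Condensed:

    # Expose the target's UNIX-domain RPC socket on a local TCP port.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    # Drive it over TCP: -r 100 retries, -t 2 s timeout, and a host/port
    # pair in place of a socket path.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods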
00:05:08.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:08.013 21:43:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:08.013 21:43:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.013 21:43:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3518007 00:05:08.013 21:43:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3518007 00:05:08.013 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3518007 ']' 00:05:08.013 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.013 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.013 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.013 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.013 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.013 [2024-07-15 21:43:02.116468] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:08.013 [2024-07-15 21:43:02.116516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518007 ] 00:05:08.013 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.013 [2024-07-15 21:43:02.171496] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.013 [2024-07-15 21:43:02.252055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.950 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.950 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:08.950 21:43:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:08.950 21:43:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:08.950 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.950 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.950 { 00:05:08.950 "filename": "/tmp/spdk_mem_dump.txt" 00:05:08.950 } 00:05:08.950 21:43:02 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.950 21:43:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:08.950 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:08.950 1 heaps totaling size 814.000000 MiB 00:05:08.950 size: 814.000000 MiB heap id: 0 00:05:08.950 end heaps---------- 00:05:08.950 8 mempools totaling size 598.116089 MiB 00:05:08.950 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:08.950 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:08.950 size: 84.521057 MiB name: bdev_io_3518007 00:05:08.950 size: 51.011292 MiB name: evtpool_3518007 00:05:08.950 
size: 50.003479 MiB name: msgpool_3518007 00:05:08.950 size: 21.763794 MiB name: PDU_Pool 00:05:08.950 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:08.950 size: 0.026123 MiB name: Session_Pool 00:05:08.950 end mempools------- 00:05:08.950 6 memzones totaling size 4.142822 MiB 00:05:08.950 size: 1.000366 MiB name: RG_ring_0_3518007 00:05:08.950 size: 1.000366 MiB name: RG_ring_1_3518007 00:05:08.950 size: 1.000366 MiB name: RG_ring_4_3518007 00:05:08.950 size: 1.000366 MiB name: RG_ring_5_3518007 00:05:08.950 size: 0.125366 MiB name: RG_ring_2_3518007 00:05:08.950 size: 0.015991 MiB name: RG_ring_3_3518007 00:05:08.950 end memzones------- 00:05:08.950 21:43:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:08.950 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:08.950 list of free elements. size: 12.519348 MiB 00:05:08.950 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:08.950 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:08.950 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:08.950 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:08.950 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:08.950 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:08.950 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:08.950 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:08.950 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:08.950 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:08.950 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:08.950 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:08.950 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:08.950 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:08.950 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:08.950 list of standard malloc elements. 
size: 199.218079 MiB 00:05:08.950 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:08.950 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:08.950 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:08.950 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:08.950 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:08.950 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:08.950 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:08.950 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:08.950 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:08.950 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:08.950 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:08.950 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:08.950 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:08.950 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:08.950 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:08.950 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:08.950 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:08.950 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:08.950 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:08.950 list of memzone associated elements. 
size: 602.262573 MiB 00:05:08.950 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:08.950 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:08.950 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:08.951 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:08.951 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:08.951 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3518007_0 00:05:08.951 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:08.951 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3518007_0 00:05:08.951 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:08.951 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3518007_0 00:05:08.951 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:08.951 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:08.951 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:08.951 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:08.951 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:08.951 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3518007 00:05:08.951 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:08.951 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3518007 00:05:08.951 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:08.951 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3518007 00:05:08.951 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:08.951 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:08.951 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:08.951 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:08.951 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:08.951 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:08.951 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:08.951 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:08.951 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:08.951 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3518007 00:05:08.951 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:08.951 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3518007 00:05:08.951 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:08.951 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3518007 00:05:08.951 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:08.951 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3518007 00:05:08.951 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:08.951 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3518007 00:05:08.951 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:08.951 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:08.951 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:08.951 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:08.951 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:08.951 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:08.951 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:08.951 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3518007 00:05:08.951 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:08.951 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:08.951 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:08.951 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:08.951 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:08.951 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3518007 00:05:08.951 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:08.951 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:08.951 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:08.951 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3518007 00:05:08.951 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:08.951 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3518007 00:05:08.951 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:08.951 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:08.951 21:43:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:08.951 21:43:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3518007 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3518007 ']' 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3518007 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3518007 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3518007' 00:05:08.951 killing process with pid 3518007 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3518007 00:05:08.951 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3518007 00:05:09.209 00:05:09.209 real 0m1.387s 00:05:09.210 user 0m1.479s 00:05:09.210 sys 0m0.377s 00:05:09.210 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.210 21:43:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:09.210 ************************************ 00:05:09.210 END TEST dpdk_mem_utility 00:05:09.210 ************************************ 00:05:09.210 21:43:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.210 21:43:03 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:09.210 21:43:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.210 21:43:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.210 21:43:03 -- common/autotest_common.sh@10 -- # set +x 00:05:09.210 ************************************ 00:05:09.210 START TEST event 00:05:09.210 ************************************ 00:05:09.210 21:43:03 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:09.467 * Looking for test storage... 
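The heap, mempool and memzone dump above is produced in two steps, both visible in the trace: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders it, once without arguments for the summary and once with -m 0 for the per-element view of heap 0 (rpc_cmd in the trace is assumed to be a thin wrapper around rpc.py):

    # Ask the target to dump its DPDK memory state, then pretty-print it.
    scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                  # heap/mempool/memzone summary
    scripts/dpdk_mem_info.py -m 0             # element list for heap id 0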
00:05:09.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:09.467 21:43:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:09.467 21:43:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:09.467 21:43:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:09.467 21:43:03 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:09.467 21:43:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.467 21:43:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.467 ************************************ 00:05:09.467 START TEST event_perf 00:05:09.467 ************************************ 00:05:09.467 21:43:03 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:09.467 Running I/O for 1 seconds...[2024-07-15 21:43:03.582303] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:09.467 [2024-07-15 21:43:03.582367] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518295 ] 00:05:09.467 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.467 [2024-07-15 21:43:03.641764] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.726 [2024-07-15 21:43:03.717991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.726 [2024-07-15 21:43:03.718008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.726 [2024-07-15 21:43:03.718026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.726 [2024-07-15 21:43:03.718027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.733 Running I/O for 1 seconds... 00:05:10.733 lcore 0: 203844 00:05:10.733 lcore 1: 203843 00:05:10.733 lcore 2: 203844 00:05:10.733 lcore 3: 203844 00:05:10.733 done. 00:05:10.733 00:05:10.733 real 0m1.227s 00:05:10.733 user 0m4.147s 00:05:10.733 sys 0m0.077s 00:05:10.733 21:43:04 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.733 21:43:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.733 ************************************ 00:05:10.733 END TEST event_perf 00:05:10.733 ************************************ 00:05:10.733 21:43:04 event -- common/autotest_common.sh@1142 -- # return 0 00:05:10.733 21:43:04 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:10.733 21:43:04 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:10.733 21:43:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.733 21:43:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.733 ************************************ 00:05:10.733 START TEST event_reactor 00:05:10.733 ************************************ 00:05:10.733 21:43:04 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:10.733 [2024-07-15 21:43:04.872938] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
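The four lcore counters event_perf printed above follow directly from its arguments: -m 0xF is a core mask, and 0xF = binary 1111, so reactors run on cores 0-3 and each reports how many events it processed during the 1-second run requested by -t 1:

    # One reactor per set bit in the mask; here roughly 204k events per
    # core over the 1 s run.
    test/event/event_perf/event_perf -m 0xF -t 1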
00:05:10.733 [2024-07-15 21:43:04.873006] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518545 ] 00:05:10.733 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.733 [2024-07-15 21:43:04.935198] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.019 [2024-07-15 21:43:05.007199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.049 test_start 00:05:12.049 oneshot 00:05:12.049 tick 100 00:05:12.049 tick 100 00:05:12.049 tick 250 00:05:12.049 tick 100 00:05:12.049 tick 100 00:05:12.049 tick 250 00:05:12.049 tick 100 00:05:12.049 tick 500 00:05:12.049 tick 100 00:05:12.049 tick 100 00:05:12.049 tick 250 00:05:12.049 tick 100 00:05:12.049 tick 100 00:05:12.049 test_end 00:05:12.049 00:05:12.049 real 0m1.221s 00:05:12.049 user 0m1.138s 00:05:12.049 sys 0m0.078s 00:05:12.049 21:43:06 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.049 21:43:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:12.049 ************************************ 00:05:12.049 END TEST event_reactor 00:05:12.049 ************************************ 00:05:12.049 21:43:06 event -- common/autotest_common.sh@1142 -- # return 0 00:05:12.049 21:43:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:12.049 21:43:06 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:12.049 21:43:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.049 21:43:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.049 ************************************ 00:05:12.049 START TEST event_reactor_perf 00:05:12.049 ************************************ 00:05:12.049 21:43:06 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:12.049 [2024-07-15 21:43:06.163675] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:12.049 [2024-07-15 21:43:06.163744] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518799 ] 00:05:12.049 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.049 [2024-07-15 21:43:06.224856] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.308 [2024-07-15 21:43:06.295286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.244 test_start 00:05:13.244 test_end 00:05:13.244 Performance: 502303 events per second 00:05:13.244 00:05:13.244 real 0m1.221s 00:05:13.244 user 0m1.142s 00:05:13.244 sys 0m0.075s 00:05:13.244 21:43:07 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.244 21:43:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.244 ************************************ 00:05:13.244 END TEST event_reactor_perf 00:05:13.244 ************************************ 00:05:13.244 21:43:07 event -- common/autotest_common.sh@1142 -- # return 0 00:05:13.244 21:43:07 event -- event/event.sh@49 -- # uname -s 00:05:13.244 21:43:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:13.244 21:43:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:13.244 21:43:07 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.244 21:43:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.244 21:43:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.244 ************************************ 00:05:13.244 START TEST event_scheduler 00:05:13.244 ************************************ 00:05:13.244 21:43:07 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:13.503 * Looking for test storage... 00:05:13.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:13.503 21:43:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:13.503 21:43:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3519075 00:05:13.503 21:43:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.503 21:43:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:13.503 21:43:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3519075 00:05:13.503 21:43:07 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3519075 ']' 00:05:13.503 21:43:07 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.503 21:43:07 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.503 21:43:07 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
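The scheduler test app above is launched with --wait-for-rpc, which holds the framework before subsystem initialization so the test can pick a scheduler first; the framework_set_scheduler/framework_start_init pair that completes the handshake appears next in the trace. As a sketch (the ordering requirement is inferred from this test's sequence, not from documentation):

    # -m 0xF: reactors on cores 0-3; -p 0x2: core 2 becomes the main lcore
    # (matching --main-lcore=2 in the EAL arguments below).
    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scripts/rpc.py framework_set_scheduler dynamic   # issued before init here
    scripts/rpc.py framework_start_init              # resume framework startup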
00:05:13.503 21:43:07 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.503 21:43:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.503 [2024-07-15 21:43:07.565813] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:13.503 [2024-07-15 21:43:07.565864] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519075 ] 00:05:13.503 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.503 [2024-07-15 21:43:07.618477] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.503 [2024-07-15 21:43:07.701172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.503 [2024-07-15 21:43:07.701256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.503 [2024-07-15 21:43:07.701361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.503 [2024-07-15 21:43:07.701363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:14.438 21:43:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 [2024-07-15 21:43:08.387720] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:14.438 [2024-07-15 21:43:08.387737] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:14.438 [2024-07-15 21:43:08.387746] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:14.438 [2024-07-15 21:43:08.387754] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:14.438 [2024-07-15 21:43:08.387760] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 [2024-07-15 21:43:08.460437] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 ************************************ 00:05:14.438 START TEST scheduler_create_thread 00:05:14.438 ************************************ 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 2 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 3 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 4 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 5 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 6 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 7 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 8 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 9 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 10 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.438 21:43:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.005 21:43:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.005 21:43:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:15.005 21:43:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.005 21:43:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.380 21:43:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.380 21:43:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:16.380 21:43:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:16.380 21:43:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.380 21:43:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 21:43:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.756 00:05:17.756 real 0m3.099s 00:05:17.756 user 0m0.022s 00:05:17.756 sys 0m0.006s 00:05:17.756 21:43:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.756 21:43:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 ************************************ 00:05:17.756 END TEST scheduler_create_thread 00:05:17.756 ************************************ 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:17.756 21:43:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:17.756 21:43:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3519075 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3519075 ']' 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3519075 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3519075 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3519075' 00:05:17.756 killing process with pid 3519075 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3519075 00:05:17.756 21:43:11 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3519075 00:05:17.756 [2024-07-15 21:43:11.971725] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
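All of the thread create/delete churn above is driven through an rpc.py plugin: scheduler_plugin adds the scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete commands used in the trace. The plugin module has to be importable, which this log does not show; the test presumably arranges that itself. Condensed from the calls above:

    # Pinned, fully-busy thread on core 0 (mask 0x1); returns a thread id.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100
    # Thread 11 throttled to 50% active; thread 12 deleted by id.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12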
00:05:18.014 00:05:18.014 real 0m4.747s 00:05:18.014 user 0m9.269s 00:05:18.014 sys 0m0.365s 00:05:18.014 21:43:12 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.014 21:43:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.014 ************************************ 00:05:18.014 END TEST event_scheduler 00:05:18.014 ************************************ 00:05:18.014 21:43:12 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.015 21:43:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:18.015 21:43:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:18.015 21:43:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.015 21:43:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.015 21:43:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.274 ************************************ 00:05:18.274 START TEST app_repeat 00:05:18.274 ************************************ 00:05:18.274 21:43:12 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3519827 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3519827' 00:05:18.274 Process app_repeat pid: 3519827 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:18.274 spdk_app_start Round 0 00:05:18.274 21:43:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3519827 /var/tmp/spdk-nbd.sock 00:05:18.274 21:43:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3519827 ']' 00:05:18.274 21:43:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.274 21:43:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.274 21:43:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.274 21:43:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.274 21:43:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.274 [2024-07-15 21:43:12.293218] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
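app_repeat is launched once in the background and then polled with waitforlisten until its RPC server answers on /var/tmp/spdk-nbd.sock. A simplified sketch of that helper, keeping the retry budget of 100 visible above (max_retries=100); the real implementation lives in common/autotest_common.sh and the rpc.py path here is an assumption:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
        # Give up immediately if the app died before it started listening.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the RPC server is accepting.
        if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}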
00:05:18.274 [2024-07-15 21:43:12.293276] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519827 ] 00:05:18.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.274 [2024-07-15 21:43:12.347857] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.274 [2024-07-15 21:43:12.421311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.274 [2024-07-15 21:43:12.421313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.219 21:43:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.219 21:43:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:19.219 21:43:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.219 Malloc0 00:05:19.219 21:43:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.219 Malloc1 00:05:19.479 21:43:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.479 /dev/nbd0 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.479 21:43:13 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.479 1+0 records in 00:05:19.479 1+0 records out 00:05:19.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018157 s, 22.6 MB/s 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.479 21:43:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.479 21:43:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.737 /dev/nbd1 00:05:19.737 21:43:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.737 21:43:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.737 21:43:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:19.737 21:43:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.737 21:43:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.737 21:43:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.737 21:43:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:19.737 21:43:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.737 21:43:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.737 21:43:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.738 21:43:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.738 1+0 records in 00:05:19.738 1+0 records out 00:05:19.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197868 s, 20.7 MB/s 00:05:19.738 21:43:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.738 21:43:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.738 21:43:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.738 21:43:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.738 21:43:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.738 21:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.738 21:43:13 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.738 21:43:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.738 21:43:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.738 21:43:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.996 { 00:05:19.996 "nbd_device": "/dev/nbd0", 00:05:19.996 "bdev_name": "Malloc0" 00:05:19.996 }, 00:05:19.996 { 00:05:19.996 "nbd_device": "/dev/nbd1", 00:05:19.996 "bdev_name": "Malloc1" 00:05:19.996 } 00:05:19.996 ]' 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.996 { 00:05:19.996 "nbd_device": "/dev/nbd0", 00:05:19.996 "bdev_name": "Malloc0" 00:05:19.996 }, 00:05:19.996 { 00:05:19.996 "nbd_device": "/dev/nbd1", 00:05:19.996 "bdev_name": "Malloc1" 00:05:19.996 } 00:05:19.996 ]' 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.996 /dev/nbd1' 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.996 /dev/nbd1' 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.996 256+0 records in 00:05:19.996 256+0 records out 00:05:19.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00824365 s, 127 MB/s 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.996 256+0 records in 00:05:19.996 256+0 records out 00:05:19.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135904 s, 77.2 MB/s 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.996 256+0 records in 00:05:19.996 256+0 records out 00:05:19.996 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0143836 s, 72.9 MB/s 00:05:19.996 21:43:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.997 21:43:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.256 21:43:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.514 21:43:14 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.514 21:43:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.773 21:43:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.773 21:43:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.773 21:43:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.031 [2024-07-15 21:43:15.174990] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.031 [2024-07-15 21:43:15.243174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.031 [2024-07-15 21:43:15.243177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.290 [2024-07-15 21:43:15.285150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.290 [2024-07-15 21:43:15.285191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.819 21:43:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.819 21:43:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:23.819 spdk_app_start Round 1 00:05:23.819 21:43:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3519827 /var/tmp/spdk-nbd.sock 00:05:23.819 21:43:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3519827 ']' 00:05:23.819 21:43:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.819 21:43:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.819 21:43:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
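Round 0 above is one full verification cycle: create Malloc0/Malloc1, export them as /dev/nbd0 and /dev/nbd1, write a random pattern through the kernel nbd path, and byte-compare it back. The core of nbd_dd_data_verify reduces to roughly the following sketch (dd/cmp invocations as in the trace; the temp file path is illustrative and the devices are assumed already exported):

# Write phase: 1 MiB of random data, pushed to each device with O_DIRECT.
tmp=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done

# Verify phase: the first 1 MiB read back must match byte for byte.
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"
done
rm "$tmp"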
00:05:23.819 21:43:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.819 21:43:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.078 21:43:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.078 21:43:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:24.078 21:43:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.336 Malloc0 00:05:24.336 21:43:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.336 Malloc1 00:05:24.336 21:43:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.336 21:43:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.594 /dev/nbd0 00:05:24.595 21:43:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.595 21:43:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:24.595 1+0 records in 00:05:24.595 1+0 records out 00:05:24.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187016 s, 21.9 MB/s 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.595 21:43:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.595 21:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.595 21:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.595 21:43:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.854 /dev/nbd1 00:05:24.854 21:43:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.854 21:43:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.854 1+0 records in 00:05:24.854 1+0 records out 00:05:24.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188598 s, 21.7 MB/s 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.854 21:43:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.854 21:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.854 21:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.854 21:43:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.854 21:43:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.855 21:43:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:25.114 { 00:05:25.114 "nbd_device": "/dev/nbd0", 00:05:25.114 "bdev_name": "Malloc0" 00:05:25.114 }, 00:05:25.114 { 00:05:25.114 "nbd_device": "/dev/nbd1", 00:05:25.114 "bdev_name": "Malloc1" 00:05:25.114 } 00:05:25.114 ]' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.114 { 00:05:25.114 "nbd_device": "/dev/nbd0", 00:05:25.114 "bdev_name": "Malloc0" 00:05:25.114 }, 00:05:25.114 { 00:05:25.114 "nbd_device": "/dev/nbd1", 00:05:25.114 "bdev_name": "Malloc1" 00:05:25.114 } 00:05:25.114 ]' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.114 /dev/nbd1' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.114 /dev/nbd1' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.114 256+0 records in 00:05:25.114 256+0 records out 00:05:25.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103359 s, 101 MB/s 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.114 256+0 records in 00:05:25.114 256+0 records out 00:05:25.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014002 s, 74.9 MB/s 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.114 256+0 records in 00:05:25.114 256+0 records out 00:05:25.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149656 s, 70.1 MB/s 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.114 21:43:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.374 21:43:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.634 21:43:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.634 21:43:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.894 21:43:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.153 [2024-07-15 21:43:20.249171] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.153 [2024-07-15 21:43:20.337964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.153 [2024-07-15 21:43:20.337969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.153 [2024-07-15 21:43:20.387526] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.153 [2024-07-15 21:43:20.387575] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.442 21:43:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.442 21:43:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:29.442 spdk_app_start Round 2 00:05:29.442 21:43:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3519827 /var/tmp/spdk-nbd.sock 00:05:29.442 21:43:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3519827 ']' 00:05:29.442 21:43:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.442 21:43:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.442 21:43:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
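Each round re-checks device readiness with waitfornbd before any I/O: first wait for the name to appear in /proc/partitions, then prove the data path with a direct 4 KiB read. A condensed sketch following the steps visible in the trace (20-attempt budget, grep, dd, stat); the scratch file path is illustrative:

waitfornbd() {
    local nbd_name=$1 i
    # Phase 1: the kernel must register the block device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    # Phase 2: a direct read proves the kernel<->SPDK nbd path works.
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null &&
            [[ $(stat -c %s /tmp/nbdtest) != 0 ]]; then
            rm -f /tmp/nbdtest
            return 0
        fi
        sleep 0.1
    done
    rm -f /tmp/nbdtest
    return 1
}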
00:05:29.442 21:43:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.442 21:43:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.442 21:43:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.442 21:43:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:29.442 21:43:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.442 Malloc0 00:05:29.442 21:43:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.442 Malloc1 00:05:29.442 21:43:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.442 21:43:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.443 21:43:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.701 /dev/nbd0 00:05:29.701 21:43:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.701 21:43:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:29.701 1+0 records in 00:05:29.701 1+0 records out 00:05:29.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018071 s, 22.7 MB/s 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.701 21:43:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:29.701 21:43:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.701 21:43:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.701 21:43:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.959 /dev/nbd1 00:05:29.959 21:43:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.959 21:43:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.959 1+0 records in 00:05:29.959 1+0 records out 00:05:29.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188783 s, 21.7 MB/s 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.959 21:43:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:29.959 21:43:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.959 21:43:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.959 21:43:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.959 21:43:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.959 21:43:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.959 21:43:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:29.959 { 00:05:29.959 "nbd_device": "/dev/nbd0", 00:05:29.959 "bdev_name": "Malloc0" 00:05:29.959 }, 00:05:29.959 { 00:05:29.959 "nbd_device": "/dev/nbd1", 00:05:29.959 "bdev_name": "Malloc1" 00:05:29.959 } 00:05:29.959 ]' 00:05:29.959 21:43:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.959 { 00:05:29.959 "nbd_device": "/dev/nbd0", 00:05:29.959 "bdev_name": "Malloc0" 00:05:29.959 }, 00:05:29.959 { 00:05:29.959 "nbd_device": "/dev/nbd1", 00:05:29.959 "bdev_name": "Malloc1" 00:05:29.959 } 00:05:29.959 ]' 00:05:29.959 21:43:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.959 21:43:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.959 /dev/nbd1' 00:05:29.959 21:43:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.959 /dev/nbd1' 00:05:29.959 21:43:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.218 256+0 records in 00:05:30.218 256+0 records out 00:05:30.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103458 s, 101 MB/s 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.218 256+0 records in 00:05:30.218 256+0 records out 00:05:30.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138728 s, 75.6 MB/s 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.218 256+0 records in 00:05:30.218 256+0 records out 00:05:30.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146552 s, 71.5 MB/s 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.218 21:43:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.477 21:43:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.736 21:43:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.736 21:43:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.994 21:43:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.253 [2024-07-15 21:43:25.268451] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.253 [2024-07-15 21:43:25.334532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.253 [2024-07-15 21:43:25.334534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.253 [2024-07-15 21:43:25.376210] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.253 [2024-07-15 21:43:25.376253] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.541 21:43:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3519827 /var/tmp/spdk-nbd.sock 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3519827 ']' 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
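Teardown is checked as strictly as bring-up: each export is stopped, the kernel node must vanish from /proc/partitions (waitfornbd_exit), and nbd_get_disks must then report an empty list, which is why the jq filter yields nothing and the grep count is 0 above. Roughly, with the bounded retry loop simplified here to an open-ended wait:

RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
for nbd in /dev/nbd0 /dev/nbd1; do
    $RPC nbd_stop_disk "$nbd"
    # waitfornbd_exit: poll until the partition entry disappears.
    while grep -q -w "$(basename "$nbd")" /proc/partitions; do
        sleep 0.1
    done
done

# The target must now report zero nbd exports.
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd) || true
[[ $count -eq 0 ]]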
00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:34.541 21:43:28 event.app_repeat -- event/event.sh@39 -- # killprocess 3519827 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3519827 ']' 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3519827 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3519827 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3519827' 00:05:34.541 killing process with pid 3519827 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3519827 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3519827 00:05:34.541 spdk_app_start is called in Round 0. 00:05:34.541 Shutdown signal received, stop current app iteration 00:05:34.541 Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 reinitialization... 00:05:34.541 spdk_app_start is called in Round 1. 00:05:34.541 Shutdown signal received, stop current app iteration 00:05:34.541 Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 reinitialization... 00:05:34.541 spdk_app_start is called in Round 2. 00:05:34.541 Shutdown signal received, stop current app iteration 00:05:34.541 Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 reinitialization... 00:05:34.541 spdk_app_start is called in Round 3. 
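The Round 0 through Round 3 summary above reflects app_repeat's design: started once with -t 4, it reruns spdk_app_start after every spdk_kill_instance SIGTERM, so a single process serves four rounds. The driving loop in event.sh reduces to roughly this sketch (helper internals as sketched earlier; the binary path is relative to the SPDK tree):

./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    # ... create Malloc0/Malloc1, export over nbd, write/verify, tear down ...
    # SIGTERM ends this iteration; the app then starts the next round.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3
done

# The last round gets no kill via RPC; killprocess reaps the binary itself.
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
killprocess "$repeat_pid"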
00:05:34.541 Shutdown signal received, stop current app iteration 00:05:34.541 21:43:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:34.541 21:43:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:34.541 00:05:34.541 real 0m16.206s 00:05:34.541 user 0m35.027s 00:05:34.541 sys 0m2.361s 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.541 21:43:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.541 ************************************ 00:05:34.541 END TEST app_repeat 00:05:34.541 ************************************ 00:05:34.541 21:43:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.541 21:43:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:34.541 21:43:28 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:34.541 21:43:28 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.541 21:43:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.541 21:43:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.541 ************************************ 00:05:34.541 START TEST cpu_locks 00:05:34.541 ************************************ 00:05:34.541 21:43:28 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:34.541 * Looking for test storage... 00:05:34.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:34.541 21:43:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:34.541 21:43:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:34.541 21:43:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:34.541 21:43:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:34.541 21:43:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.542 21:43:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.542 21:43:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.542 ************************************ 00:05:34.542 START TEST default_locks 00:05:34.542 ************************************ 00:05:34.542 21:43:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:34.542 21:43:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3522809 00:05:34.542 21:43:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3522809 00:05:34.542 21:43:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.542 21:43:28 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3522809 ']' 00:05:34.542 21:43:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.542 21:43:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.542 21:43:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
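The "Waiting for process to start up..." message the default_locks case just printed comes from the waitforlisten helper in test/common/autotest_common.sh; the trace shows its inputs (a pid, an RPC socket path, max_retries=100) but not its body. A plausible reconstruction of the idiom, assuming the probe is an rpc_get_methods call (any cheap RPC would do; the real helper may probe differently) and rpc.py is on PATH:

    # Poll the app's RPC socket until it answers or retries run out.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died before listening
            # -t 1 caps each probe at one second.
            rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                        # retries exhausted
    }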
00:05:34.542 21:43:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.542 21:43:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.542 [2024-07-15 21:43:28.709594] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:34.542 [2024-07-15 21:43:28.709633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522809 ] 00:05:34.542 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.542 [2024-07-15 21:43:28.764029] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.800 [2024-07-15 21:43:28.844040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.368 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.368 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:35.368 21:43:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3522809 00:05:35.368 21:43:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3522809 00:05:35.368 21:43:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.626 lslocks: write error 00:05:35.626 21:43:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3522809 00:05:35.626 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3522809 ']' 00:05:35.626 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3522809 00:05:35.626 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:35.626 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.626 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3522809 00:05:35.885 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.885 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.885 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3522809' 00:05:35.885 killing process with pid 3522809 00:05:35.885 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3522809 00:05:35.885 21:43:29 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3522809 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3522809 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3522809 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3522809 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3522809 ']' 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3522809) - No such process 00:05:36.144 ERROR: process (pid: 3522809) is no longer running 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.144 00:05:36.144 real 0m1.543s 00:05:36.144 user 0m1.607s 00:05:36.144 sys 0m0.501s 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.144 21:43:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.144 ************************************ 00:05:36.144 END TEST default_locks 00:05:36.144 ************************************ 00:05:36.144 21:43:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:36.144 21:43:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.144 21:43:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.144 21:43:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.144 21:43:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.144 ************************************ 00:05:36.144 START TEST default_locks_via_rpc 00:05:36.144 ************************************ 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3523076 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3523076 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3523076 ']' 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.144 21:43:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.144 [2024-07-15 21:43:30.311774] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:36.144 [2024-07-15 21:43:30.311820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523076 ] 00:05:36.144 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.144 [2024-07-15 21:43:30.367311] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.404 [2024-07-15 21:43:30.447452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3523076 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3523076 00:05:36.972 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
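The default_locks_via_rpc case above exercises the same per-core lock files as default_locks but toggles them at runtime: framework_disable_cpumask_locks releases them while the app keeps running, framework_enable_cpumask_locks takes them back, and the locks_exist check is nothing more than lslocks piped through grep (the stray "lslocks: write error" seen earlier in the log is lslocks complaining after grep -q closes the pipe early, not a failure). Condensed, with the target pid in $pid and its socket in $sock:

    # Drop the per-core lock files without stopping the app...
    rpc.py -s "$sock" framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: locks still held"
    # ...then take them back on request.
    rpc.py -s "$sock" framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held again"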
00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3523076 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3523076 ']' 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3523076 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3523076 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3523076' 00:05:37.232 killing process with pid 3523076 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3523076 00:05:37.232 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3523076 00:05:37.491 00:05:37.491 real 0m1.433s 00:05:37.491 user 0m1.523s 00:05:37.491 sys 0m0.444s 00:05:37.491 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.491 21:43:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.491 ************************************ 00:05:37.491 END TEST default_locks_via_rpc 00:05:37.491 ************************************ 00:05:37.491 21:43:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:37.491 21:43:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:37.491 21:43:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.491 21:43:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.491 21:43:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.750 ************************************ 00:05:37.750 START TEST non_locking_app_on_locked_coremask 00:05:37.750 ************************************ 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3523394 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3523394 /var/tmp/spdk.sock 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3523394 ']' 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.750 21:43:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.750 [2024-07-15 21:43:31.812678] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:37.750 [2024-07-15 21:43:31.812717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523394 ] 00:05:37.750 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.750 [2024-07-15 21:43:31.866869] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.750 [2024-07-15 21:43:31.946901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3523568 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3523568 /var/tmp/spdk2.sock 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3523568 ']' 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.687 21:43:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.687 [2024-07-15 21:43:32.640167] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:38.687 [2024-07-15 21:43:32.640212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523568 ] 00:05:38.687 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.687 [2024-07-15 21:43:32.716889] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
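non_locking_app_on_locked_coremask runs two targets on the same core, and the launch lines above show why that is legal: the first spdk_tgt claims core 0 normally (-m 0x1), while the second is started with --disable-cpumask-locks and its own socket, which is what produces the "CPU core locks deactivated" notice just printed. In outline, assuming spdk_tgt is invoked from the build tree:

    # First instance: claims the core-0 lock file, default socket /var/tmp/spdk.sock.
    build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    # Second instance: same core mask, but opts out of lock files and listens
    # on its own socket so the two apps can coexist.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!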
00:05:38.687 [2024-07-15 21:43:32.716914] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.687 [2024-07-15 21:43:32.870478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.254 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.254 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:39.254 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3523394 00:05:39.254 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3523394 00:05:39.254 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.512 lslocks: write error 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3523394 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3523394 ']' 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3523394 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3523394 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3523394' 00:05:39.512 killing process with pid 3523394 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3523394 00:05:39.512 21:43:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3523394 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3523568 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3523568 ']' 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3523568 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3523568 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3523568' 00:05:40.449 
killing process with pid 3523568 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3523568 00:05:40.449 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3523568 00:05:40.706 00:05:40.707 real 0m2.961s 00:05:40.707 user 0m3.175s 00:05:40.707 sys 0m0.804s 00:05:40.707 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.707 21:43:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.707 ************************************ 00:05:40.707 END TEST non_locking_app_on_locked_coremask 00:05:40.707 ************************************ 00:05:40.707 21:43:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:40.707 21:43:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:40.707 21:43:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.707 21:43:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.707 21:43:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.707 ************************************ 00:05:40.707 START TEST locking_app_on_unlocked_coremask 00:05:40.707 ************************************ 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3524052 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3524052 /var/tmp/spdk.sock 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3524052 ']' 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.707 21:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.707 [2024-07-15 21:43:34.843865] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:40.707 [2024-07-15 21:43:34.843906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524052 ] 00:05:40.707 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.707 [2024-07-15 21:43:34.896985] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:40.707 [2024-07-15 21:43:34.897010] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.971 [2024-07-15 21:43:34.969142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3524070 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3524070 /var/tmp/spdk2.sock 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3524070 ']' 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.541 21:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.541 [2024-07-15 21:43:35.673899] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
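locking_app_on_unlocked_coremask, now starting, is the mirror image: this time the first target is the one launched with --disable-cpumask-locks, so core 0 stays unclaimed and the second, fully locking target may take the lock even though the core is already busy. One way to see who actually holds it, reusing $pid1 and $pid2 from the sketch above:

    # The opted-out first instance holds no spdk_cpu_lock entries...
    lslocks -p "$pid1" | grep spdk_cpu_lock || echo "pid1 holds no core locks"
    # ...while the locking second instance owns the core-0 lock file.
    lslocks -p "$pid2" | grep spdk_cpu_lock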
00:05:41.541 [2024-07-15 21:43:35.673948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524070 ] 00:05:41.541 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.541 [2024-07-15 21:43:35.752090] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.800 [2024-07-15 21:43:35.904147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.367 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.367 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:42.367 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3524070 00:05:42.367 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3524070 00:05:42.367 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.935 lslocks: write error 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3524052 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3524052 ']' 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3524052 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3524052 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3524052' 00:05:42.935 killing process with pid 3524052 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3524052 00:05:42.935 21:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3524052 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3524070 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3524070 ']' 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3524070 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3524070 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3524070' 00:05:43.640 killing process with pid 3524070 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3524070 00:05:43.640 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3524070 00:05:43.919 00:05:43.919 real 0m3.126s 00:05:43.919 user 0m3.334s 00:05:43.919 sys 0m0.893s 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.919 ************************************ 00:05:43.919 END TEST locking_app_on_unlocked_coremask 00:05:43.919 ************************************ 00:05:43.919 21:43:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.919 21:43:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:43.919 21:43:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.919 21:43:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.919 21:43:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.919 ************************************ 00:05:43.919 START TEST locking_app_on_locked_coremask 00:05:43.919 ************************************ 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3524562 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3524562 /var/tmp/spdk.sock 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3524562 ']' 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.919 21:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.919 [2024-07-15 21:43:38.033719] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:43.919 [2024-07-15 21:43:38.033761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524562 ] 00:05:43.919 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.919 [2024-07-15 21:43:38.087237] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.919 [2024-07-15 21:43:38.154802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3524768 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3524768 /var/tmp/spdk2.sock 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3524768 /var/tmp/spdk2.sock 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3524768 /var/tmp/spdk2.sock 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3524768 ']' 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.857 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.858 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.858 21:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.858 [2024-07-15 21:43:38.880914] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
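The "NOT waitforlisten" sequence that follows is the suite's expected-failure wrapper: the second target is launched on a core whose lock is already held, so it must exit, waitforlisten must fail, and NOT inverts that failure into a pass (the claim error and "No such process" kill message that follow are the intended outcome). A stripped-down version of the idiom; the real helper also special-cases signal exits, which is the "(( es > 128 ))" check in the trace:

    # NOT <cmd>: succeed only when <cmd> fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    # Core 0 is already claimed, so this app dies and waitforlisten gives up:
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock && echo "lock enforced as expected"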
00:05:44.858 [2024-07-15 21:43:38.880967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524768 ] 00:05:44.858 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.858 [2024-07-15 21:43:38.958691] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3524562 has claimed it. 00:05:44.858 [2024-07-15 21:43:38.958729] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3524768) - No such process 00:05:45.426 ERROR: process (pid: 3524768) is no longer running 00:05:45.426 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.426 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:45.426 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:45.426 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.426 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.426 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.426 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3524562 00:05:45.426 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3524562 00:05:45.426 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.686 lslocks: write error 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3524562 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3524562 ']' 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3524562 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3524562 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3524562' 00:05:45.686 killing process with pid 3524562 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3524562 00:05:45.686 21:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3524562 00:05:45.945 00:05:45.945 real 0m2.118s 00:05:45.945 user 0m2.350s 00:05:45.945 sys 0m0.550s 00:05:45.945 21:43:40 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.945 21:43:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.945 ************************************ 00:05:45.945 END TEST locking_app_on_locked_coremask 00:05:45.945 ************************************ 00:05:45.945 21:43:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:45.945 21:43:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:45.945 21:43:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.945 21:43:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.945 21:43:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.945 ************************************ 00:05:45.945 START TEST locking_overlapped_coremask 00:05:45.945 ************************************ 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3525004 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3525004 /var/tmp/spdk.sock 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3525004 ']' 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.945 21:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.205 [2024-07-15 21:43:40.218485] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:05:46.205 [2024-07-15 21:43:40.218524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525004 ] 00:05:46.205 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.205 [2024-07-15 21:43:40.273072] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.205 [2024-07-15 21:43:40.354343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.205 [2024-07-15 21:43:40.354440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.205 [2024-07-15 21:43:40.354440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3525066 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3525066 /var/tmp/spdk2.sock 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3525066 /var/tmp/spdk2.sock 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3525066 /var/tmp/spdk2.sock 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3525066 ']' 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.142 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.142 [2024-07-15 21:43:41.076546] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
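The refusal that follows is pure mask arithmetic: the first target holds cores 0 through 2 (-m 0x7) and the second asks for cores 2 through 4 (-m 0x1c), so the two masks collide on core 2 and the second launch is rejected with the claim error below. The overlap is easy to verify:

    # 0x7 = 0b00111 (cores 0,1,2); 0x1c = 0b11100 (cores 2,3,4)
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2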
00:05:47.142 [2024-07-15 21:43:41.076595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525066 ] 00:05:47.143 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.143 [2024-07-15 21:43:41.155409] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3525004 has claimed it. 00:05:47.143 [2024-07-15 21:43:41.155445] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3525066) - No such process 00:05:47.711 ERROR: process (pid: 3525066) is no longer running 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3525004 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3525004 ']' 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3525004 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3525004 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3525004' 00:05:47.711 killing process with pid 3525004 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3525004 00:05:47.711 21:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3525004 00:05:47.971 00:05:47.971 real 0m1.899s 00:05:47.971 user 0m5.380s 00:05:47.971 sys 0m0.402s 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.971 ************************************ 00:05:47.971 END TEST locking_overlapped_coremask 00:05:47.971 ************************************ 00:05:47.971 21:43:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:47.971 21:43:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:47.971 21:43:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.971 21:43:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.971 21:43:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.971 ************************************ 00:05:47.971 START TEST locking_overlapped_coremask_via_rpc 00:05:47.971 ************************************ 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3525322 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3525322 /var/tmp/spdk.sock 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3525322 ']' 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.971 21:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.971 [2024-07-15 21:43:42.184723] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:47.971 [2024-07-15 21:43:42.184768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525322 ] 00:05:47.971 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.230 [2024-07-15 21:43:42.237729] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:48.230 [2024-07-15 21:43:42.237754] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.230 [2024-07-15 21:43:42.313275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.230 [2024-07-15 21:43:42.313370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.230 [2024-07-15 21:43:42.313372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3525550 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3525550 /var/tmp/spdk2.sock 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3525550 ']' 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.799 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.057 [2024-07-15 21:43:43.050910] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:49.057 [2024-07-15 21:43:43.050961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525550 ] 00:05:49.057 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.057 [2024-07-15 21:43:43.126923] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:49.057 [2024-07-15 21:43:43.126953] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.057 [2024-07-15 21:43:43.278390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.057 [2024-07-15 21:43:43.282271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.057 [2024-07-15 21:43:43.282271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:49.624 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.624 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:49.624 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.624 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.624 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.882 [2024-07-15 21:43:43.882303] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3525322 has claimed it. 
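The core the claim fails on is not arbitrary: the two masks used by this test, -m 0x7 (cores 0-2) and -m 0x1c (cores 2-4), intersect on exactly one bit, as a line of shell arithmetic confirms:

    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 -> CPU core 2, the core named in the error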
00:05:49.882 request: 00:05:49.882 { 00:05:49.882 "method": "framework_enable_cpumask_locks", 00:05:49.882 "req_id": 1 00:05:49.882 } 00:05:49.882 Got JSON-RPC error response 00:05:49.882 response: 00:05:49.882 { 00:05:49.882 "code": -32603, 00:05:49.882 "message": "Failed to claim CPU core: 2" 00:05:49.882 } 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3525322 /var/tmp/spdk.sock 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3525322 ']' 00:05:49.882 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.883 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.883 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.883 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.883 21:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.883 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.883 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:49.883 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3525550 /var/tmp/spdk2.sock 00:05:49.883 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3525550 ']' 00:05:49.883 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.883 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.883 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
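rpc_cmd in the trace is a thin wrapper over scripts/rpc.py; reproduced by hand (paths relative to an SPDK checkout, sockets as used above), the two calls would look roughly like:

    ./scripts/rpc.py framework_enable_cpumask_locks                         # first target (default /var/tmp/spdk.sock): succeeds, creates the lock files
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: fails with -32603 "Failed to claim CPU core: 2"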
00:05:49.883 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.883 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.140 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.140 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:50.140 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:50.140 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.140 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.140 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.140 00:05:50.140 real 0m2.131s 00:05:50.140 user 0m0.889s 00:05:50.140 sys 0m0.165s 00:05:50.140 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.140 21:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.140 ************************************ 00:05:50.140 END TEST locking_overlapped_coremask_via_rpc 00:05:50.140 ************************************ 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:50.140 21:43:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:50.140 21:43:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3525322 ]] 00:05:50.140 21:43:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3525322 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3525322 ']' 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3525322 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3525322 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3525322' 00:05:50.140 killing process with pid 3525322 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3525322 00:05:50.140 21:43:44 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3525322 00:05:50.707 21:43:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3525550 ]] 00:05:50.707 21:43:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3525550 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3525550 ']' 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3525550 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3525550 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3525550' 00:05:50.707 killing process with pid 3525550 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3525550 00:05:50.707 21:43:44 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3525550 00:05:50.966 21:43:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.966 21:43:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:50.966 21:43:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3525322 ]] 00:05:50.966 21:43:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3525322 00:05:50.966 21:43:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3525322 ']' 00:05:50.966 21:43:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3525322 00:05:50.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3525322) - No such process 00:05:50.966 21:43:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3525322 is not found' 00:05:50.966 Process with pid 3525322 is not found 00:05:50.966 21:43:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3525550 ]] 00:05:50.966 21:43:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3525550 00:05:50.966 21:43:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3525550 ']' 00:05:50.966 21:43:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3525550 00:05:50.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3525550) - No such process 00:05:50.966 21:43:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3525550 is not found' 00:05:50.966 Process with pid 3525550 is not found 00:05:50.966 21:43:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.966 00:05:50.966 real 0m16.495s 00:05:50.966 user 0m28.884s 00:05:50.966 sys 0m4.655s 00:05:50.966 21:43:45 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.966 21:43:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.966 ************************************ 00:05:50.966 END TEST cpu_locks 00:05:50.966 ************************************ 00:05:50.966 21:43:45 event -- common/autotest_common.sh@1142 -- # return 0 00:05:50.966 00:05:50.966 real 0m41.612s 00:05:50.966 user 1m19.798s 00:05:50.966 sys 0m7.950s 00:05:50.966 21:43:45 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.966 21:43:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.967 ************************************ 00:05:50.967 END TEST event 00:05:50.967 ************************************ 00:05:50.967 21:43:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.967 21:43:45 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:50.967 21:43:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.967 21:43:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.967 
21:43:45 -- common/autotest_common.sh@10 -- # set +x 00:05:50.967 ************************************ 00:05:50.967 START TEST thread 00:05:50.967 ************************************ 00:05:50.967 21:43:45 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:50.967 * Looking for test storage... 00:05:51.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:51.225 21:43:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.225 21:43:45 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:51.225 21:43:45 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.225 21:43:45 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 ************************************ 00:05:51.225 START TEST thread_poller_perf 00:05:51.225 ************************************ 00:05:51.225 21:43:45 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.225 [2024-07-15 21:43:45.270456] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:51.225 [2024-07-15 21:43:45.270523] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525892 ] 00:05:51.225 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.225 [2024-07-15 21:43:45.329607] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.225 [2024-07-15 21:43:45.403283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.225 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:52.600 ====================================== 00:05:52.600 busy:2305578094 (cyc) 00:05:52.600 total_run_count: 403000 00:05:52.600 tsc_hz: 2300000000 (cyc) 00:05:52.600 ====================================== 00:05:52.600 poller_cost: 5721 (cyc), 2487 (nsec) 00:05:52.600 00:05:52.600 real 0m1.231s 00:05:52.600 user 0m1.157s 00:05:52.600 sys 0m0.071s 00:05:52.600 21:43:46 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.600 21:43:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.600 ************************************ 00:05:52.600 END TEST thread_poller_perf 00:05:52.600 ************************************ 00:05:52.600 21:43:46 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:52.600 21:43:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.600 21:43:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:52.600 21:43:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.600 21:43:46 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.600 ************************************ 00:05:52.600 START TEST thread_poller_perf 00:05:52.600 ************************************ 00:05:52.600 21:43:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.600 [2024-07-15 21:43:46.568387] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:52.600 [2024-07-15 21:43:46.568458] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526144 ] 00:05:52.601 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.601 [2024-07-15 21:43:46.625976] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.601 [2024-07-15 21:43:46.697053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.601 Running 1000 pollers for 1 seconds with 0 microseconds period. 
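The counters in the 1-microsecond summary above fully determine poller_cost: cycles per poller invocation come from busy / total_run_count, and the nanosecond figure follows from tsc_hz (2.3 GHz on this node). A quick sanity check with the logged numbers:

    echo $(( 2305578094 / 403000 ))              # -> 5721 cycles per poller call
    echo $(( 5721 * 1000000000 / 2300000000 ))   # -> 2487 ns, matching the reported poller_cost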
00:05:53.534 ====================================== 00:05:53.534 busy:2301721764 (cyc) 00:05:53.534 total_run_count: 5335000 00:05:53.534 tsc_hz: 2300000000 (cyc) 00:05:53.534 ====================================== 00:05:53.534 poller_cost: 431 (cyc), 187 (nsec) 00:05:53.534 00:05:53.534 real 0m1.222s 00:05:53.534 user 0m1.141s 00:05:53.534 sys 0m0.077s 00:05:53.534 21:43:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.534 21:43:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.534 ************************************ 00:05:53.534 END TEST thread_poller_perf 00:05:53.534 ************************************ 00:05:53.792 21:43:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:53.792 21:43:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:53.792 00:05:53.792 real 0m2.678s 00:05:53.792 user 0m2.389s 00:05:53.792 sys 0m0.298s 00:05:53.792 21:43:47 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.793 21:43:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.793 ************************************ 00:05:53.793 END TEST thread 00:05:53.793 ************************************ 00:05:53.793 21:43:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.793 21:43:47 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:53.793 21:43:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.793 21:43:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.793 21:43:47 -- common/autotest_common.sh@10 -- # set +x 00:05:53.793 ************************************ 00:05:53.793 START TEST accel 00:05:53.793 ************************************ 00:05:53.793 21:43:47 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:53.793 * Looking for test storage... 00:05:53.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:53.793 21:43:47 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:53.793 21:43:47 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:53.793 21:43:47 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.793 21:43:47 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3526436 00:05:53.793 21:43:47 accel -- accel/accel.sh@63 -- # waitforlisten 3526436 00:05:53.793 21:43:47 accel -- common/autotest_common.sh@829 -- # '[' -z 3526436 ']' 00:05:53.793 21:43:47 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.793 21:43:47 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.793 21:43:47 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.793 21:43:47 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.793 21:43:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.793 21:43:47 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:53.793 21:43:47 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:53.793 21:43:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.793 21:43:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.793 21:43:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.793 21:43:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.793 21:43:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.793 21:43:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:53.793 21:43:47 accel -- accel/accel.sh@41 -- # jq -r . 00:05:53.793 [2024-07-15 21:43:48.008488] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:53.793 [2024-07-15 21:43:48.008533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526436 ] 00:05:53.793 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.051 [2024-07-15 21:43:48.064516] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.051 [2024-07-15 21:43:48.144189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.617 21:43:48 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.617 21:43:48 accel -- common/autotest_common.sh@862 -- # return 0 00:05:54.617 21:43:48 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:54.618 21:43:48 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:54.618 21:43:48 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:54.618 21:43:48 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:54.618 21:43:48 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:54.618 21:43:48 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:54.618 21:43:48 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:54.618 21:43:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.618 21:43:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.618 21:43:48 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.618 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.618 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.618 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.877 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.877 
21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.877 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.877 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.877 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.877 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.877 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.877 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.877 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.877 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.877 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.877 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.877 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.877 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.877 21:43:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.877 21:43:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:54.877 21:43:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:54.877 21:43:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:54.877 21:43:48 accel -- accel/accel.sh@75 -- # killprocess 3526436 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@948 -- # '[' -z 3526436 ']' 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@952 -- # kill -0 3526436 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@953 -- # uname 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3526436 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3526436' 00:05:54.877 killing process with pid 3526436 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@967 -- # kill 3526436 00:05:54.877 21:43:48 accel -- common/autotest_common.sh@972 -- # wait 3526436 00:05:55.135 21:43:49 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:55.135 21:43:49 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:55.135 21:43:49 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:55.135 21:43:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.135 21:43:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.135 21:43:49 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:55.135 21:43:49 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:55.135 21:43:49 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:55.135 21:43:49 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.135 21:43:49 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.135 21:43:49 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.135 21:43:49 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.135 21:43:49 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.135 21:43:49 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:55.135 21:43:49 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
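The opc/module loop traced earlier populates expected_opcs from the accel_get_opc_assignments RPC, flattened into opc=module pairs by jq. Run standalone against the same target it would look roughly like this (every opcode maps to software here, since no hardware accel module is loaded; the sample output is illustrative):

    ./scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # e.g. copy=software, crc32c=software, compress=software, ...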
00:05:55.135 21:43:49 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.135 21:43:49 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:55.135 21:43:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.135 21:43:49 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:55.135 21:43:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:55.135 21:43:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.135 21:43:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.135 ************************************ 00:05:55.135 START TEST accel_missing_filename 00:05:55.135 ************************************ 00:05:55.135 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:55.135 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:55.135 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:55.135 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:55.135 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.135 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:55.135 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.135 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:55.135 21:43:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:55.135 21:43:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:55.135 21:43:49 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.135 21:43:49 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.135 21:43:49 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.135 21:43:49 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.135 21:43:49 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.135 21:43:49 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:55.135 21:43:49 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:55.135 [2024-07-15 21:43:49.358717] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:55.135 [2024-07-15 21:43:49.358773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526702 ] 00:05:55.394 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.394 [2024-07-15 21:43:49.415744] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.394 [2024-07-15 21:43:49.487352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.394 [2024-07-15 21:43:49.528410] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.394 [2024-07-15 21:43:49.588518] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:55.653 A filename is required. 
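The failure above is the intended negative case: a compress workload has no default input, so accel_perf exits during argument validation when -l is missing. Sketched against the same binary:

    ./build/examples/accel_perf -t 1 -w compress   # -> "A filename is required.", non-zero exit
    # supplying -l <uncompressed input file> satisfies this check; pairing it with -y
    # is the next negative case below, since compress does not support verification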
00:05:55.653 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:55.653 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.653 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:55.653 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:55.653 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:55.653 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.653 00:05:55.653 real 0m0.327s 00:05:55.653 user 0m0.245s 00:05:55.653 sys 0m0.117s 00:05:55.653 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.653 21:43:49 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:55.653 ************************************ 00:05:55.653 END TEST accel_missing_filename 00:05:55.653 ************************************ 00:05:55.653 21:43:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.653 21:43:49 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.653 21:43:49 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:55.653 21:43:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.653 21:43:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.653 ************************************ 00:05:55.653 START TEST accel_compress_verify 00:05:55.653 ************************************ 00:05:55.653 21:43:49 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.653 21:43:49 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:55.653 21:43:49 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.653 21:43:49 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:55.653 21:43:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.653 21:43:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:55.653 21:43:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.653 21:43:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.653 21:43:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.653 21:43:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:55.653 21:43:49 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.653 21:43:49 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.653 21:43:49 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.653 21:43:49 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.653 21:43:49 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.653 21:43:49 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:55.653 21:43:49 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:55.653 [2024-07-15 21:43:49.748083] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:05:55.653 [2024-07-15 21:43:49.748149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526840 ] 00:05:55.653 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.653 [2024-07-15 21:43:49.804893] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.653 [2024-07-15 21:43:49.880752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.913 [2024-07-15 21:43:49.921313] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.913 [2024-07-15 21:43:49.980548] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:55.913 00:05:55.913 Compression does not support the verify option, aborting. 00:05:55.913 21:43:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:55.913 21:43:50 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.913 21:43:50 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:55.913 21:43:50 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:55.913 21:43:50 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:55.913 21:43:50 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.913 00:05:55.913 real 0m0.331s 00:05:55.913 user 0m0.256s 00:05:55.913 sys 0m0.113s 00:05:55.913 21:43:50 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.913 21:43:50 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:55.913 ************************************ 00:05:55.913 END TEST accel_compress_verify 00:05:55.913 ************************************ 00:05:55.913 21:43:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.913 21:43:50 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:55.913 21:43:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:55.913 21:43:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.913 21:43:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.913 ************************************ 00:05:55.913 START TEST accel_wrong_workload 00:05:55.913 ************************************ 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:55.913 21:43:50 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:55.913 21:43:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:55.913 21:43:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:55.913 21:43:50 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.913 21:43:50 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.913 21:43:50 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.913 21:43:50 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.913 21:43:50 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.913 21:43:50 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:55.913 21:43:50 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:55.913 Unsupported workload type: foobar 00:05:55.913 [2024-07-15 21:43:50.146789] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:55.913 accel_perf options: 00:05:55.913 [-h help message] 00:05:55.913 [-q queue depth per core] 00:05:55.913 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:55.913 [-T number of threads per core 00:05:55.913 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:55.913 [-t time in seconds] 00:05:55.913 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:55.913 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:55.913 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:55.913 [-l for compress/decompress workloads, name of uncompressed input file 00:05:55.913 [-S for crc32c workload, use this seed value (default 0) 00:05:55.913 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:55.913 [-f for fill workload, use this BYTE value (default 255) 00:05:55.913 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:55.913 [-y verify result if this switch is on] 00:05:55.913 [-a tasks to allocate per core (default: same value as -q)] 00:05:55.913 Can be used to spread operations across a wider range of memory. 
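The -w validation shown above happens before the app starts: any workload name outside the list in the usage text is rejected with exit status 1. Compare the failing and valid forms (the latter is exactly what the crc32c test below runs):

    ./build/examples/accel_perf -t 1 -w foobar            # -> "Unsupported workload type: foobar"
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # valid: crc32c with seed 32, result verification on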
00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.913 00:05:55.913 real 0m0.033s 00:05:55.913 user 0m0.017s 00:05:55.913 sys 0m0.015s 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.913 21:43:50 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:55.913 ************************************ 00:05:55.913 END TEST accel_wrong_workload 00:05:55.913 ************************************ 00:05:56.173 Error: writing output failed: Broken pipe 00:05:56.173 21:43:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.173 21:43:50 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:56.173 21:43:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:56.173 21:43:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.173 21:43:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.173 ************************************ 00:05:56.173 START TEST accel_negative_buffers 00:05:56.173 ************************************ 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:56.173 21:43:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:56.173 21:43:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:56.173 21:43:50 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.173 21:43:50 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.173 21:43:50 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.173 21:43:50 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.173 21:43:50 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.173 21:43:50 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:56.173 21:43:50 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:56.173 -x option must be non-negative. 
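Buffer counts get the same up-front validation: -x must be non-negative, and per the usage text the xor workload needs at least two source buffers, so 2 is the smallest value that should be accepted (a hedged sketch; the positive form is not exercised in this run):

    ./build/examples/accel_perf -t 1 -w xor -y -x -1   # -> "-x option must be non-negative.", exit 1
    ./build/examples/accel_perf -t 1 -w xor -y -x 2    # minimum source-buffer count per the usage text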
00:05:56.173 [2024-07-15 21:43:50.235057] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:56.173 accel_perf options: 00:05:56.173 [-h help message] 00:05:56.173 [-q queue depth per core] 00:05:56.173 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:56.173 [-T number of threads per core 00:05:56.173 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:56.173 [-t time in seconds] 00:05:56.173 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:56.173 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:56.173 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:56.173 [-l for compress/decompress workloads, name of uncompressed input file 00:05:56.173 [-S for crc32c workload, use this seed value (default 0) 00:05:56.173 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:56.173 [-f for fill workload, use this BYTE value (default 255) 00:05:56.173 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:56.173 [-y verify result if this switch is on] 00:05:56.173 [-a tasks to allocate per core (default: same value as -q)] 00:05:56.173 Can be used to spread operations across a wider range of memory. 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.173 00:05:56.173 real 0m0.029s 00:05:56.173 user 0m0.015s 00:05:56.173 sys 0m0.013s 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.173 21:43:50 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:56.173 ************************************ 00:05:56.173 END TEST accel_negative_buffers 00:05:56.173 ************************************ 00:05:56.173 Error: writing output failed: Broken pipe 00:05:56.173 21:43:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.173 21:43:50 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:56.173 21:43:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:56.173 21:43:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.173 21:43:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.173 ************************************ 00:05:56.173 START TEST accel_crc32c 00:05:56.173 ************************************ 00:05:56.173 21:43:50 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:56.173 21:43:50 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:56.173 21:43:50 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:56.173 21:43:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.173 21:43:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.173 21:43:50 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:56.173 21:43:50 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:05:56.173 21:43:50 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:56.173 21:43:50 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:56.173 [2024-07-15 21:43:50.332097] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:05:56.174 [2024-07-15 21:43:50.332147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527007 ]
00:05:56.174 EAL: No free 2048 kB hugepages reported on node 1
00:05:56.174 [2024-07-15 21:43:50.391136] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.488 [2024-07-15 21:43:50.470122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:05:56.488 21:43:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes
00:05:57.431 21:43:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:57.431 21:43:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:57.431 21:43:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:57.431 real 0m1.340s
00:05:57.431 user 0m1.228s
00:05:57.431 sys 0m0.114s
00:05:57.431 ************************************
00:05:57.431 END TEST accel_crc32c
00:05:57.431 ************************************
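The crc32c pass above reduces to one accel_perf invocation. A minimal sketch for reproducing it outside the harness, assuming the same SPDK build tree (path copied from the log; SPDK_DIR is a convenience variable introduced here, and the -c /dev/fd/62 JSON config, which this trace shows as an empty accel_json_cfg, is dropped):

    # Software-path crc32c run with the exact flags the harness passed above.
    # SPDK_DIR is an assumed convenience variable, not part of the harness.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w crc32c -S 32 -y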
00:05:57.689 21:43:51 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:57.689 21:43:51 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:05:57.689 ************************************
00:05:57.689 START TEST accel_crc32c_C2
00:05:57.689 ************************************
00:05:57.689 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:05:57.689 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:57.689 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:57.690 [2024-07-15 21:43:51.726299] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:05:57.690 [2024-07-15 21:43:51.726352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527255 ]
00:05:57.690 EAL: No free 2048 kB hugepages reported on node 1
00:05:57.690 [2024-07-15 21:43:51.782671] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:57.690 [2024-07-15 21:43:51.854252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:57.690 21:43:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:59.066 21:43:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:59.066 21:43:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:59.066 21:43:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:59.066 real 0m1.326s
00:05:59.066 user 0m1.216s
00:05:59.066 sys 0m0.112s
00:05:59.066 ************************************
00:05:59.066 END TEST accel_crc32c_C2
00:05:59.066 ************************************
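The _C2 case just completed differs from the plain crc32c run only in its arguments: -C 2 is added and -S 32 dropped, exactly as its run_test line records. A sketch with SPDK_DIR as set in the earlier sketch:

    # Same workload, _C2 variant; flag values copied verbatim from the log.
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w crc32c -y -C 2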
00:05:59.066 21:43:53 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:59.066 21:43:53 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:05:59.066 ************************************
00:05:59.066 START TEST accel_copy
00:05:59.066 ************************************
00:05:59.066 21:43:53 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:59.066 21:43:53 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:59.066 21:43:53 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:59.066 [2024-07-15 21:43:53.120674] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:05:59.066 [2024-07-15 21:43:53.120727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527509 ]
00:05:59.066 EAL: No free 2048 kB hugepages reported on node 1
00:05:59.066 [2024-07-15 21:43:53.176002] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.066 [2024-07-15 21:43:53.245698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:59.067 21:43:53 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:06:00.440 21:43:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:00.440 21:43:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:06:00.440 21:43:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:00.440 real 0m1.327s
00:06:00.440 user 0m1.219s
00:06:00.440 sys 0m0.110s
00:06:00.440 ************************************
00:06:00.440 END TEST accel_copy
00:06:00.440 ************************************
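Three workloads in, the pattern is clear: the suite sweeps one binary across operation types. A compact sketch of that sweep (workload names copied from the TEST banners; per-workload extras such as fill's -f/-q/-a are omitted here and appear in the fill excerpt below):

    # One-second verified software-path run per workload, in suite order.
    # Assumes SPDK_DIR from the first sketch; defaults fill in any flags
    # the harness would normally add per workload.
    for wl in crc32c copy fill copy_crc32c dualcast; do
        "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$wl" -y
    done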
00:06:00.440 21:43:54 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:00.440 21:43:54 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:00.440 ************************************
00:06:00.440 START TEST accel_fill
00:06:00.440 ************************************
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:00.440 [2024-07-15 21:43:54.504488] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:06:00.440 [2024-07-15 21:43:54.504539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527754 ]
00:06:00.440 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.440 [2024-07-15 21:43:54.558844] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.440 [2024-07-15 21:43:54.630272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:06:00.440 21:43:54 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:06:01.815 21:43:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:01.815 21:43:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:06:01.815 21:43:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:01.815 real 0m1.325s
00:06:01.815 user 0m1.219s
00:06:01.815 sys 0m0.107s
00:06:01.815 ************************************
00:06:01.815 END TEST accel_fill
00:06:01.815 ************************************
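The fill case is the first to add extra knobs. The values below are copied from its run_test line (-f 128 -q 64 -a 64) and match the 0x80 fill byte and the 64/64 values visible in the trace; they are the harness's choices, not recommendations:

    # Fill workload with the harness's fill-byte/queue/alignment arguments;
    # SPDK_DIR as in the first sketch.
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y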
00:06:01.815 21:43:55 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:01.815 21:43:55 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:01.815 ************************************
00:06:01.815 START TEST accel_copy_crc32c
00:06:01.815 ************************************
00:06:01.816 21:43:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:01.816 21:43:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:06:01.816 21:43:55 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:01.816 [2024-07-15 21:43:55.887112] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:06:01.816 [2024-07-15 21:43:55.887161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528001 ]
00:06:01.816 EAL: No free 2048 kB hugepages reported on node 1
00:06:01.816 [2024-07-15 21:43:55.941380] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.816 [2024-07-15 21:43:56.012354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.816 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:06:01.816 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:01.816 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:01.816 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:06:01.816 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:01.816 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:02.074 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:06:02.074 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:06:02.074 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:02.074 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:02.074 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:06:02.074 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:06:02.074 21:43:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:06:03.005 21:43:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:03.005 21:43:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:03.005 21:43:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:03.005 real 0m1.324s
00:06:03.005 user 0m1.222s
00:06:03.005 sys 0m0.104s
00:06:03.005 ************************************
00:06:03.005 END TEST accel_copy_crc32c
00:06:03.005 ************************************
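Every run initializes DPDK under a fresh --file-prefix=spdk_pid<NNN>, visible in each EAL parameter dump, keeping the runs' hugepage files separate. If this console output is saved to a file (build.log is an assumed name), the per-run prefixes can be listed with:

    # List the unique DPDK file prefixes, one per accel_perf run in the log.
    grep -o 'file-prefix=spdk_pid[0-9]*' build.log | sort -u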
00:06:03.005 21:43:57 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:03.005 21:43:57 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:03.005 ************************************
00:06:03.005 START TEST accel_copy_crc32c_C2
00:06:03.005 ************************************
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:03.263 [2024-07-15 21:43:57.269574] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:06:03.263 [2024-07-15 21:43:57.269620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528256 ]
00:06:03.263 EAL: No free 2048 kB hugepages reported on node 1
00:06:03.263 [2024-07-15 21:43:57.323741] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.263 [2024-07-15 21:43:57.395383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:03.263 21:43:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:06:04.634 21:43:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:04.635 21:43:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:04.635 21:43:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:04.635 real 0m1.324s
00:06:04.635 user 0m1.215s
00:06:04.635 sys 0m0.112s
00:06:04.635 ************************************
00:06:04.635 END TEST accel_copy_crc32c_C2
00:06:04.635 ************************************
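All six completed cases land in a narrow band (real 0m1.324s to 0m1.340s). For a rough regression check, the wall-clock figures can be pulled from a saved copy of the log (build.log again an assumed name):

    # Extract the per-test 'real' timing lines emitted after each run.
    grep -E 'real +[0-9]+m[0-9.]+s' build.log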
00:06:04.635 [2024-07-15 21:43:58.653764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528507 ] 00:06:04.635 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.635 [2024-07-15 21:43:58.708236] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.635 [2024-07-15 21:43:58.779402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:04.635 21:43:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:43:59 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:06.008 21:43:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.008 00:06:06.008 real 0m1.326s 00:06:06.008 user 0m1.215s 00:06:06.008 sys 0m0.112s 00:06:06.008 21:43:59 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.008 21:43:59 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:06.008 ************************************ 00:06:06.008 END TEST accel_dualcast 00:06:06.008 ************************************ 00:06:06.008 21:43:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.008 21:43:59 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:06.008 21:43:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:06.008 21:43:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.008 21:43:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.008 ************************************ 00:06:06.008 START TEST accel_compare 00:06:06.008 ************************************ 00:06:06.008 21:44:00 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:06.008 [2024-07-15 21:44:00.039063] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
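[editor's aside] The dualcast block above copies a single 4096-byte source (the val='4096 bytes' line) into two destination buffers, and the -y flag makes accel_perf verify both copies. A loose shell analogue of that check, on hypothetical files rather than the in-memory buffers the real test uses:

  # Illustration only: one source, two destinations, both must match.
  src=src.bin
  dd if=/dev/urandom of="$src" bs=4096 count=1 2>/dev/null
  cp "$src" dst0.bin && cp "$src" dst1.bin          # the "dual" cast
  cmp -s "$src" dst0.bin && cmp -s "$src" dst1.bin && echo 'dualcast OK'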
00:06:06.008 [2024-07-15 21:44:00.039112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528755 ] 00:06:06.008 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.008 [2024-07-15 21:44:00.094357] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.008 [2024-07-15 21:44:00.166387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.008 21:44:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 
21:44:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:07.379 21:44:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.379 00:06:07.379 real 0m1.329s 00:06:07.379 user 0m1.212s 00:06:07.379 sys 0m0.119s 00:06:07.379 21:44:01 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.379 21:44:01 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:07.379 ************************************ 00:06:07.379 END TEST accel_compare 00:06:07.379 ************************************ 00:06:07.379 21:44:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.379 21:44:01 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:07.379 21:44:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:07.379 21:44:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.379 21:44:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.379 ************************************ 00:06:07.379 START TEST accel_xor 00:06:07.379 ************************************ 00:06:07.379 21:44:01 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:07.379 [2024-07-15 21:44:01.423895] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
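[editor's aside] The compare workload above is the buffer-equality counterpart: two equal-length 4096-byte buffers are checked byte for byte, much as cmp(1) does for files. A rough out-of-band analogue (hypothetical files, for illustration only):

  # Build two identical 4096-byte files, then compare them.
  printf 'x%.0s' {1..4096} > a.bin && cp a.bin b.bin
  cmp -s a.bin b.bin && echo 'buffers match' || echo 'buffers differ'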
00:06:07.379 [2024-07-15 21:44:01.423961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529005 ] 00:06:07.379 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.379 [2024-07-15 21:44:01.478602] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.379 [2024-07-15 21:44:01.549826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.379 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.380 21:44:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.752 00:06:08.752 real 0m1.327s 00:06:08.752 user 0m1.223s 00:06:08.752 sys 0m0.107s 00:06:08.752 21:44:02 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.752 21:44:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:08.752 ************************************ 00:06:08.752 END TEST accel_xor 00:06:08.752 ************************************ 00:06:08.752 21:44:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.752 21:44:02 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:08.752 21:44:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.752 21:44:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.752 21:44:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.752 ************************************ 00:06:08.752 START TEST accel_xor 00:06:08.752 ************************************ 00:06:08.752 21:44:02 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:08.752 [2024-07-15 21:44:02.808486] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
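[editor's aside] The harness runs the xor workload twice: the pass that just finished uses the default of two source buffers (the val=2 line above), and the second pass adds -x 3 to xor three sources into one destination (the val=3 line below). Run standalone from the SPDK tree, the two passes would look like:

  # Same flags as the run_test lines in this trace.
  ./build/examples/accel_perf -t 1 -w xor -y          # default: 2 sources
  ./build/examples/accel_perf -t 1 -w xor -y -x 3     # 3 sources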
00:06:08.752 [2024-07-15 21:44:02.808551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529252 ] 00:06:08.752 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.752 [2024-07-15 21:44:02.863865] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.752 [2024-07-15 21:44:02.935577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.752 21:44:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:10.129 21:44:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.129 00:06:10.129 real 0m1.329s 00:06:10.129 user 0m1.222s 00:06:10.129 sys 0m0.108s 00:06:10.129 21:44:04 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.129 21:44:04 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:10.129 ************************************ 00:06:10.129 END TEST accel_xor 00:06:10.129 ************************************ 00:06:10.129 21:44:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.129 21:44:04 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:10.129 21:44:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:10.129 21:44:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.129 21:44:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.129 ************************************ 00:06:10.129 START TEST accel_dif_verify 00:06:10.129 ************************************ 00:06:10.129 21:44:04 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:10.129 [2024-07-15 21:44:04.194700] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
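[editor's aside] The dif_verify configuration that follows advertises two 4096-byte buffers plus '512 bytes' and '8 bytes' entries, which reads like a classic T10 DIF layout (an interpretation, not stated in the log): protection information is carried as 8 bytes per 512-byte block, so a 4096-byte buffer spans 8 blocks and 64 bytes of protection data:

  # Blocks per buffer times protection-info bytes per block.
  echo $(( (4096 / 512) * 8 ))    # -> 64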
00:06:10.129 [2024-07-15 21:44:04.194746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529506 ] 00:06:10.129 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.129 [2024-07-15 21:44:04.249540] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.129 [2024-07-15 21:44:04.320929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.129 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.130 21:44:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:11.507 21:44:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.507 00:06:11.507 real 0m1.327s 00:06:11.507 user 0m1.214s 00:06:11.507 sys 0m0.116s 00:06:11.507 21:44:05 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.507 21:44:05 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:11.507 ************************************ 00:06:11.507 END TEST accel_dif_verify 00:06:11.507 ************************************ 00:06:11.507 21:44:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.507 21:44:05 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:11.507 21:44:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:11.507 21:44:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.507 21:44:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.507 ************************************ 00:06:11.507 START TEST accel_dif_generate 00:06:11.507 ************************************ 00:06:11.507 21:44:05 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 
21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:11.507 [2024-07-15 21:44:05.579661] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:11.507 [2024-07-15 21:44:05.579707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529751 ] 00:06:11.507 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.507 [2024-07-15 21:44:05.634127] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.507 [2024-07-15 21:44:05.705359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.507 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:11.767 21:44:05 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.767 21:44:05 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.767 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.768 21:44:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.706 21:44:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:12.706 21:44:06 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.706 00:06:12.706 real 0m1.327s 00:06:12.706 user 0m1.223s 00:06:12.706 sys 0m0.107s 00:06:12.706 21:44:06 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.706 21:44:06 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:12.706 ************************************ 00:06:12.706 END TEST accel_dif_generate 00:06:12.706 ************************************ 00:06:12.706 21:44:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.706 21:44:06 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:12.706 21:44:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:12.706 21:44:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.706 21:44:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.706 ************************************ 00:06:12.706 START TEST accel_dif_generate_copy 00:06:12.706 ************************************ 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:12.706 21:44:06 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:12.966 [2024-07-15 21:44:06.964580] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:12.966 [2024-07-15 21:44:06.964627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529998 ] 00:06:12.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.966 [2024-07-15 21:44:07.018873] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.966 [2024-07-15 21:44:07.090115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.966 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.967 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.967 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.967 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.967 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.967 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.967 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.967 21:44:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.346 00:06:14.346 real 0m1.328s 00:06:14.346 user 0m1.222s 00:06:14.346 sys 0m0.108s 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.346 21:44:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:14.346 ************************************ 00:06:14.346 END TEST accel_dif_generate_copy 00:06:14.346 ************************************ 00:06:14.346 21:44:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.346 21:44:08 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:14.346 21:44:08 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.346 21:44:08 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:14.346 21:44:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.346 21:44:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.346 ************************************ 00:06:14.346 START TEST accel_comp 00:06:14.346 ************************************ 00:06:14.346 21:44:08 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.346 21:44:08 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:14.346 [2024-07-15 21:44:08.349674] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:14.346 [2024-07-15 21:44:08.349723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530252 ] 00:06:14.346 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.346 [2024-07-15 21:44:08.404219] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.346 [2024-07-15 21:44:08.475175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.346 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.347 21:44:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:15.727 21:44:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.727 00:06:15.727 real 0m1.328s 00:06:15.727 user 0m1.226s 00:06:15.727 sys 0m0.104s 00:06:15.728 21:44:09 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.728 21:44:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:15.728 ************************************ 00:06:15.728 END TEST accel_comp 00:06:15.728 ************************************ 00:06:15.728 21:44:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.728 21:44:09 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.728 21:44:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:15.728 21:44:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.728 21:44:09 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.728 ************************************ 00:06:15.728 START TEST accel_decomp 00:06:15.728 ************************************ 00:06:15.728 21:44:09 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:15.728 [2024-07-15 21:44:09.734363] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:15.728 [2024-07-15 21:44:09.734409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530503 ] 00:06:15.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.728 [2024-07-15 21:44:09.789146] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.728 [2024-07-15 21:44:09.860353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.728 21:44:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:17.109 21:44:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.109 00:06:17.109 real 0m1.327s 00:06:17.109 user 0m1.215s 00:06:17.109 sys 0m0.114s 00:06:17.109 21:44:11 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.109 21:44:11 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:17.109 ************************************ 00:06:17.109 END TEST accel_decomp 00:06:17.109 ************************************ 00:06:17.109 21:44:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.109 21:44:11 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:17.109 21:44:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:17.109 21:44:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.109 21:44:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.109 ************************************ 00:06:17.109 START TEST accel_decomp_full 00:06:17.109 ************************************ 00:06:17.109 21:44:11 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:17.109 21:44:11 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:17.109 [2024-07-15 21:44:11.120078] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:17.109 [2024-07-15 21:44:11.120123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530749 ] 00:06:17.109 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.109 [2024-07-15 21:44:11.174637] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.109 [2024-07-15 21:44:11.246708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.110 21:44:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:18.490 21:44:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.490 00:06:18.490 real 0m1.335s 00:06:18.490 user 0m1.225s 00:06:18.490 sys 0m0.113s 00:06:18.490 21:44:12 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.490 21:44:12 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:18.490 ************************************ 00:06:18.490 END TEST accel_decomp_full 00:06:18.490 ************************************ 00:06:18.490 21:44:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.490 21:44:12 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:18.490 21:44:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:18.490 21:44:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.490 21:44:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.490 ************************************ 00:06:18.490 START TEST accel_decomp_mcore 00:06:18.490 ************************************ 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:18.490 [2024-07-15 21:44:12.514771] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:06:18.490 [2024-07-15 21:44:12.514846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531000 ] 00:06:18.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.490 [2024-07-15 21:44:12.570714] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.490 [2024-07-15 21:44:12.645264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.490 [2024-07-15 21:44:12.645362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.490 [2024-07-15 21:44:12.645440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.490 [2024-07-15 21:44:12.645441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.490 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:18.491 21:44:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:19.933 21:44:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.933 00:06:19.933 real 0m1.350s 00:06:19.933 user 0m4.580s 00:06:19.933 sys 0m0.117s 00:06:19.933 21:44:13 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:19.933 21:44:13 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:19.933 ************************************
00:06:19.933 END TEST accel_decomp_mcore
00:06:19.933 ************************************
00:06:19.933 21:44:13 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:19.933 21:44:13 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:19.933 21:44:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:19.933 21:44:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:19.933 21:44:13 accel -- common/autotest_common.sh@10 -- # set +x
00:06:19.933 ************************************
00:06:19.933 START TEST accel_decomp_full_mcore
00:06:19.933 ************************************
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:06:19.933 21:44:13 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:06:19.933 [2024-07-15 21:44:13.935878] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
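
The accel_perf command captured above is reproducible by hand. A minimal sketch, assuming a built SPDK tree at the workspace path used throughout this log; the -c /dev/fd/62 JSON config that build_accel_config pipes in is omitted here, on the assumption that without it accel_perf falls back to the default software module (consistent with the [[ -n software ]] checks in this log):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" \
    -t 1 \                            # run the workload for 1 second
    -w decompress \                   # operation under test
    -l "$SPDK_DIR/test/accel/bib" \   # pre-compressed input shipped with the tests
    -y \                              # verify the decompressed output
    -o 0 \                            # "_full_" variants: whole-file-sized buffers ('111250 bytes' here vs '4096 bytes' in the non-full runs; inferred from the val= lines)
    -m 0xf                            # "_mcore" variants: core mask 0xf, i.e. the four reactors started on the next lines
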
00:06:19.933 [2024-07-15 21:44:13.935932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531254 ] 00:06:19.933 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.933 [2024-07-15 21:44:13.994057] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.933 [2024-07-15 21:44:14.067349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.933 [2024-07-15 21:44:14.067448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.933 [2024-07-15 21:44:14.067534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.933 [2024-07-15 21:44:14.067536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.933 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 21:44:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=:
00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:21.313
00:06:21.313 real 0m1.361s
00:06:21.313 user 0m4.604s
00:06:21.313 sys 0m0.127s
00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:21.313 21:44:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:21.313 ************************************
00:06:21.313 END TEST accel_decomp_full_mcore
00:06:21.313 ************************************
00:06:21.313 21:44:15 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:21.313 21:44:15 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:21.313 21:44:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:06:21.313 21:44:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:21.313 21:44:15 accel -- common/autotest_common.sh@10 -- # set +x
00:06:21.313 ************************************
00:06:21.313 START TEST accel_decomp_mthread
00:06:21.313 ************************************
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:06:21.313 [2024-07-15 21:44:15.362495] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
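
The _mthread variant just started differs only in its tail flags: the core mask is dropped and -T 2 is added, which reads as two decompression worker tasks (an inference from the test name; the log itself only echoes the flag). A hedged sketch, same assumptions as above:

  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y \
    -T 2   # two parallel tasks on the single default core, 4096-byte buffers
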
00:06:21.313 [2024-07-15 21:44:15.362563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531509 ] 00:06:21.313 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.313 [2024-07-15 21:44:15.419727] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.313 [2024-07-15 21:44:15.492199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.313 21:44:15 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.313 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.314 21:44:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.692 21:44:16 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:22.692 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.693 00:06:22.693 real 0m1.343s 00:06:22.693 user 0m1.242s 00:06:22.693 sys 0m0.115s 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.693 21:44:16 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:22.693 ************************************ 00:06:22.693 END TEST accel_decomp_mthread 00:06:22.693 ************************************ 00:06:22.693 21:44:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.693 21:44:16 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:22.693 21:44:16 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:22.693 21:44:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.693 21:44:16 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:22.693 ************************************ 00:06:22.693 START TEST accel_decomp_full_mthread 00:06:22.693 ************************************ 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:22.693 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:22.693 [2024-07-15 21:44:16.770730] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
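
Taken together, the four decompress cases in this block are one command under four flag combinations, so they can be swept in a loop; a sketch (the harness's run_test wraps each case so that bash reports the real/user/sys lines seen in this log, an assumption from the output format):

  for args in "-m 0xf" "-o 0 -m 0xf" "-T 2" "-o 0 -T 2"; do
    time "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
         -l "$SPDK_DIR/test/accel/bib" -y $args   # mcore, full_mcore, mthread, full_mthread
  done
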
00:06:22.693 [2024-07-15 21:44:16.770780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531756 ] 00:06:22.693 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.693 [2024-07-15 21:44:16.826271] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.693 [2024-07-15 21:44:16.900145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:22.951 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.952 21:44:16 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.952 21:44:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.887 00:06:23.887 real 0m1.366s 00:06:23.887 user 0m1.263s 00:06:23.887 sys 0m0.116s 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.887 21:44:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:23.887 ************************************ 00:06:23.887 END 
TEST accel_decomp_full_mthread
00:06:23.887 ************************************
00:06:24.145 21:44:18 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:24.145 21:44:18 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:06:24.145 21:44:18 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:24.145 21:44:18 accel -- accel/accel.sh@137 -- # build_accel_config
00:06:24.145 21:44:18 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:24.145 21:44:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:24.145 21:44:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:24.145 21:44:18 accel -- common/autotest_common.sh@10 -- # set +x
00:06:24.145 21:44:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:24.145 21:44:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:24.145 21:44:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:24.145 21:44:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:24.145 21:44:18 accel -- accel/accel.sh@40 -- # local IFS=,
00:06:24.145 21:44:18 accel -- accel/accel.sh@41 -- # jq -r .
00:06:24.145 ************************************
00:06:24.145 START TEST accel_dif_functional_tests
00:06:24.145 ************************************
00:06:24.145 21:44:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:24.145 [2024-07-15 21:44:18.221527] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:06:24.145 [2024-07-15 21:44:18.221560] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532005 ]
00:06:24.145 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.145 [2024-07-15 21:44:18.273691] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:24.145 [2024-07-15 21:44:18.348102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:24.145 [2024-07-15 21:44:18.348119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:24.145 [2024-07-15 21:44:18.348121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.404
00:06:24.404
00:06:24.404 CUnit - A unit testing framework for C - Version 2.1-3
00:06:24.404 http://cunit.sourceforge.net/
00:06:24.404
00:06:24.404
00:06:24.404 Suite: accel_dif
00:06:24.404 Test: verify: DIF generated, GUARD check ...passed
00:06:24.404 Test: verify: DIF generated, APPTAG check ...passed
00:06:24.404 Test: verify: DIF generated, REFTAG check ...passed
00:06:24.404 Test: verify: DIF not generated, GUARD check ...[2024-07-15 21:44:18.416611] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:24.404 passed
00:06:24.404 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 21:44:18.416662] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:24.404 passed
00:06:24.404 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 21:44:18.416696] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:24.404 passed
00:06:24.404 Test: verify: APPTAG correct, APPTAG check ...passed
00:06:24.404 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 21:44:18.416739] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:06:24.404 passed
00:06:24.404 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:24.404 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:24.404 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:24.404 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 21:44:18.416839] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:24.404 passed
00:06:24.404 Test: verify copy: DIF generated, GUARD check ...passed
00:06:24.404 Test: verify copy: DIF generated, APPTAG check ...passed
00:06:24.404 Test: verify copy: DIF generated, REFTAG check ...passed
00:06:24.404 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 21:44:18.416951] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:24.404 passed
00:06:24.404 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 21:44:18.416971] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:24.404 passed
00:06:24.404 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 21:44:18.416990] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:24.404 passed
00:06:24.404 Test: generate copy: DIF generated, GUARD check ...passed
00:06:24.404 Test: generate copy: DIF generated, APTTAG check ...passed
00:06:24.404 Test: generate copy: DIF generated, REFTAG check ...passed
00:06:24.404 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:24.404 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:24.404 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:24.404 Test: generate copy: iovecs-len validate ...[2024-07-15 21:44:18.417160] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:24.404 passed
00:06:24.404 Test: generate copy: buffer alignment validate ...passed
00:06:24.404
00:06:24.404 Run Summary: Type Total Ran Passed Failed Inactive
00:06:24.404 suites 1 1 n/a 0 0
00:06:24.404 tests 26 26 26 0 0
00:06:24.404 asserts 115 115 115 0 n/a
00:06:24.404
00:06:24.404 Elapsed time = 0.000 seconds
00:06:24.404
00:06:24.404 real 0m0.405s
00:06:24.404 user 0m0.621s
00:06:24.404 sys 0m0.137s
00:06:24.404 21:44:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:24.404 21:44:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:06:24.404 ************************************
00:06:24.404 END TEST accel_dif_functional_tests
00:06:24.404 ************************************
00:06:24.404 21:44:18 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:24.404
00:06:24.404 real 0m30.746s
00:06:24.404 user 0m34.513s
00:06:24.404 sys 0m4.074s
00:06:24.404 21:44:18 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:24.404 21:44:18 accel -- common/autotest_common.sh@10 -- # set +x
00:06:24.404 ************************************
00:06:24.404 END TEST accel
00:06:24.404 ************************************
00:06:24.662 21:44:18 -- common/autotest_common.sh@1142 -- # return 0
00:06:24.662 21:44:18 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:24.662 21:44:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:24.662 21:44:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:24.662 21:44:18 -- common/autotest_common.sh@10 -- # set +x
00:06:24.662 ************************************
00:06:24.662 START TEST accel_rpc
00:06:24.662 ************************************
00:06:24.662 21:44:18 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:24.662 * Looking for test storage...
00:06:24.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:24.662 21:44:18 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:24.662 21:44:18 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3532291
00:06:24.662 21:44:18 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:06:24.662 21:44:18 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3532291
00:06:24.662 21:44:18 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3532291 ']'
00:06:24.662 21:44:18 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:24.662 21:44:18 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:24.662 21:44:18 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:24.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:24.662 21:44:18 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:24.662 21:44:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:24.662 [2024-07-15 21:44:18.824752] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
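
The accel_rpc suite starting here exercises the target over JSON-RPC rather than through accel_perf. Its shape, as a sketch (the rpc.py method names are taken verbatim from the calls logged below; waitforlisten is the harness's poll loop for /var/tmp/spdk.sock):

  "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &      # start paused, before framework init
  spdk_tgt_pid=$!
  # waitforlisten: block until /var/tmp/spdk.sock accepts connections
  "$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software
  "$SPDK_DIR/scripts/rpc.py" framework_start_init      # finish startup with the override applied
  "$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # expect: software
  kill "$spdk_tgt_pid"
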
00:06:24.662 [2024-07-15 21:44:18.824799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532291 ]
00:06:24.662 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.662 [2024-07-15 21:44:18.878192] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.920 [2024-07-15 21:44:18.957532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.487 21:44:19 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:25.487 21:44:19 accel_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:25.487 21:44:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:06:25.487 21:44:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:06:25.487 21:44:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:06:25.487 21:44:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:06:25.487 21:44:19 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:06:25.487 21:44:19 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:25.487 21:44:19 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:25.487 21:44:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:25.487 ************************************
00:06:25.487 START TEST accel_assign_opcode
00:06:25.487 ************************************
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:25.487 [2024-07-15 21:44:19.651593] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:25.487 [2024-07-15 21:44:19.659605] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.487 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:25.747 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.747 21:44:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:06:25.747 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:25.747 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
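
Worth noting in the two accel_assign_opc calls above: while the app is held at --wait-for-rpc, even a nonexistent module name ("incorrect") is accepted and merely logged as a notice; the later assignment to software is the one in effect once framework_start_init binds modules, as the grep check below confirms. Equivalent calls (rpc.py standing in for $SPDK_DIR/scripts/rpc.py):

  rpc.py accel_assign_opc -o copy -m incorrect   # pre-init: accepted, NOTICE only
  rpc.py accel_assign_opc -o copy -m software    # last assignment wins
  rpc.py framework_start_init                    # modules actually bound here
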
00:06:25.747 21:44:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:06:25.747 21:44:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:06:25.747 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:25.747 software
00:06:25.747
00:06:25.747 real 0m0.232s
00:06:25.747 user 0m0.046s
00:06:25.747 sys 0m0.006s
00:06:25.747 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:25.747 21:44:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:25.747 ************************************
00:06:25.747 END TEST accel_assign_opcode
00:06:25.747 ************************************
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:06:25.747 21:44:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3532291
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3532291 ']'
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3532291
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@953 -- # uname
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3532291
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3532291'
00:06:25.747 killing process with pid 3532291
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@967 -- # kill 3532291
00:06:25.747 21:44:19 accel_rpc -- common/autotest_common.sh@972 -- # wait 3532291
00:06:26.316
00:06:26.316 real 0m1.576s
00:06:26.316 user 0m1.675s
00:06:26.316 sys 0m0.385s
00:06:26.316 21:44:20 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:26.316 21:44:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:26.316 ************************************
00:06:26.316 END TEST accel_rpc
00:06:26.316 ************************************
00:06:26.316 21:44:20 -- common/autotest_common.sh@1142 -- # return 0
00:06:26.316 21:44:20 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:26.316 21:44:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:26.316 21:44:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:26.316 21:44:20 -- common/autotest_common.sh@10 -- # set +x
00:06:26.316 ************************************
00:06:26.316 START TEST app_cmdline
00:06:26.316 ************************************
00:06:26.316 21:44:20 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:26.316 * Looking for test storage...
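
cmdline.sh, beginning here, starts spdk_tgt with an RPC allow-list so that only the two named methods are callable. A sketch of the positive checks it performs (the version string matches the JSON printed below):

  "$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
  "$SPDK_DIR/scripts/rpc.py" spdk_get_version | jq -r .version
  # -> SPDK v24.09-pre git sha1 91f51bb85
  "$SPDK_DIR/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort
  # -> rpc_get_methods spdk_get_version   (exactly the two allowed entries)
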
00:06:26.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:26.316 21:44:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.316 21:44:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3532600 00:06:26.316 21:44:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.316 21:44:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3532600 00:06:26.316 21:44:20 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3532600 ']' 00:06:26.316 21:44:20 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.316 21:44:20 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.316 21:44:20 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.316 21:44:20 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.316 21:44:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.316 [2024-07-15 21:44:20.458454] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:26.317 [2024-07-15 21:44:20.458500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532600 ] 00:06:26.317 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.317 [2024-07-15 21:44:20.514082] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.576 [2024-07-15 21:44:20.594077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.144 21:44:21 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.144 21:44:21 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:27.144 21:44:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:27.404 { 00:06:27.404 "version": "SPDK v24.09-pre git sha1 91f51bb85", 00:06:27.404 "fields": { 00:06:27.404 "major": 24, 00:06:27.404 "minor": 9, 00:06:27.404 "patch": 0, 00:06:27.404 "suffix": "-pre", 00:06:27.404 "commit": "91f51bb85" 00:06:27.404 } 00:06:27.404 } 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.404 21:44:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.404 21:44:21 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.405 request: 00:06:27.405 { 00:06:27.405 "method": "env_dpdk_get_mem_stats", 00:06:27.405 "req_id": 1 00:06:27.405 } 00:06:27.405 Got JSON-RPC error response 00:06:27.405 response: 00:06:27.405 { 00:06:27.405 "code": -32601, 00:06:27.405 "message": "Method not found" 00:06:27.405 } 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.405 21:44:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3532600 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3532600 ']' 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3532600 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.405 21:44:21 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3532600 00:06:27.664 21:44:21 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.664 21:44:21 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.664 21:44:21 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3532600' 00:06:27.664 killing process with pid 3532600 00:06:27.664 21:44:21 app_cmdline -- common/autotest_common.sh@967 -- # kill 3532600 00:06:27.664 21:44:21 app_cmdline -- common/autotest_common.sh@972 -- # wait 3532600 00:06:27.922 00:06:27.922 real 0m1.628s 00:06:27.922 user 0m1.952s 00:06:27.922 sys 0m0.381s 00:06:27.922 21:44:21 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
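The -32601 reply above is the point of this test: the spdk_tgt instance for app_cmdline was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other method, env_dpdk_get_mem_stats included, is rejected as if it did not exist. A short sketch of the same allow-list behaviour, with binary and script paths as used in this run (the harness waits for /var/tmp/spdk.sock via waitforlisten; a plain sleep stands in for that here):

# Expose only two RPC methods on this target instance.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
    --rpcs-allowed spdk_get_version,rpc_get_methods &
sleep 1   # crude stand-in for waitforlisten

# On the allow-list: returns the version object printed earlier in the log.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
# Not on the allow-list: fails with JSON-RPC error -32601 (Method not found).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats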
00:06:27.922 21:44:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.922 ************************************ 00:06:27.922 END TEST app_cmdline 00:06:27.922 ************************************ 00:06:27.922 21:44:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.922 21:44:21 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:27.922 21:44:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.922 21:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.922 21:44:21 -- common/autotest_common.sh@10 -- # set +x 00:06:27.922 ************************************ 00:06:27.922 START TEST version 00:06:27.922 ************************************ 00:06:27.922 21:44:22 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:27.922 * Looking for test storage... 00:06:27.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:27.922 21:44:22 version -- app/version.sh@17 -- # get_header_version major 00:06:27.922 21:44:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:27.922 21:44:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.922 21:44:22 version -- app/version.sh@14 -- # cut -f2 00:06:27.922 21:44:22 version -- app/version.sh@17 -- # major=24 00:06:27.922 21:44:22 version -- app/version.sh@18 -- # get_header_version minor 00:06:27.922 21:44:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:27.922 21:44:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.922 21:44:22 version -- app/version.sh@14 -- # cut -f2 00:06:27.922 21:44:22 version -- app/version.sh@18 -- # minor=9 00:06:27.922 21:44:22 version -- app/version.sh@19 -- # get_header_version patch 00:06:27.922 21:44:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:27.922 21:44:22 version -- app/version.sh@14 -- # cut -f2 00:06:27.922 21:44:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.922 21:44:22 version -- app/version.sh@19 -- # patch=0 00:06:27.922 21:44:22 version -- app/version.sh@20 -- # get_header_version suffix 00:06:27.922 21:44:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:27.922 21:44:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.922 21:44:22 version -- app/version.sh@14 -- # cut -f2 00:06:27.922 21:44:22 version -- app/version.sh@20 -- # suffix=-pre 00:06:27.922 21:44:22 version -- app/version.sh@22 -- # version=24.9 00:06:27.922 21:44:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:27.922 21:44:22 version -- app/version.sh@28 -- # version=24.9rc0 00:06:27.922 21:44:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:27.922 21:44:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:28.181 21:44:22 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:28.181 21:44:22 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:28.181 00:06:28.181 real 0m0.158s 00:06:28.181 user 0m0.083s 00:06:28.181 sys 0m0.107s 00:06:28.181 21:44:22 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.181 21:44:22 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.181 ************************************ 00:06:28.181 END TEST version 00:06:28.181 ************************************ 00:06:28.181 21:44:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.181 21:44:22 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:28.181 21:44:22 -- spdk/autotest.sh@198 -- # uname -s 00:06:28.181 21:44:22 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:28.181 21:44:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:28.181 21:44:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:28.181 21:44:22 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:28.181 21:44:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:28.181 21:44:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:28.181 21:44:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:28.181 21:44:22 -- common/autotest_common.sh@10 -- # set +x 00:06:28.181 21:44:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:28.181 21:44:22 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:28.181 21:44:22 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:28.181 21:44:22 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:28.181 21:44:22 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:28.181 21:44:22 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:28.181 21:44:22 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:28.181 21:44:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:28.181 21:44:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.181 21:44:22 -- common/autotest_common.sh@10 -- # set +x 00:06:28.181 ************************************ 00:06:28.181 START TEST nvmf_tcp 00:06:28.181 ************************************ 00:06:28.181 21:44:22 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:28.181 * Looking for test storage... 00:06:28.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.181 21:44:22 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.181 21:44:22 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.181 21:44:22 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.181 21:44:22 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.181 21:44:22 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.182 21:44:22 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.182 21:44:22 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.182 21:44:22 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:28.182 21:44:22 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:28.182 21:44:22 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:28.182 21:44:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:28.182 21:44:22 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:28.182 21:44:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:28.182 21:44:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.182 21:44:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.441 ************************************ 00:06:28.441 START TEST nvmf_example 00:06:28.441 ************************************ 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:28.441 * Looking for test storage... 
00:06:28.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:28.441 21:44:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:33.717 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:33.717 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:33.717 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:33.717 Found net devices under 0000:86:00.1: cvl_0_1 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.717 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.977 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.977 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:33.977 21:44:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:33.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:06:33.977 00:06:33.977 --- 10.0.0.2 ping statistics --- 00:06:33.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.977 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:33.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:06:33.977 00:06:33.977 --- 10.0.0.1 ping statistics --- 00:06:33.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.977 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.977 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3536023 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3536023 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3536023 ']' 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
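Both pings succeeding confirms the test topology: the two ports of the same physical NIC (cvl_0_0 and cvl_0_1, discovered above) are split across network namespaces so that target and initiator traffic actually traverses the link. Condensed, the setup traced in this log amounts to the following (interface names and addresses as reported above):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator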
00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.978 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:33.978 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.915 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.915 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:34.915 21:44:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:34.915 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:34.915 21:44:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:34.915 21:44:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:34.915 EAL: No free 2048 kB hugepages reported on node 1 
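Once the example target is listening, the test provisions it over RPC and immediately drives load at it; the results that follow were produced by exactly the sequence traced above. Summarized (rpc.py talks over a filesystem UNIX-domain socket, which is why the trace does not wrap it in ip netns exec):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as used by the harness
$rpc bdev_malloc_create 64 512                      # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

That is ten seconds of 4 KiB random I/O at queue depth 64 with a 30/70 read/write mix; the table below reports roughly 17.1k IOPS (about 67 MiB/s) at an average latency of ~3.7 ms for this single-core configuration.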
00:06:47.125 Initializing NVMe Controllers 00:06:47.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:47.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:47.125 Initialization complete. Launching workers. 00:06:47.125 ======================================================== 00:06:47.125 Latency(us) 00:06:47.125 Device Information : IOPS MiB/s Average min max 00:06:47.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17091.18 66.76 3745.42 721.95 16348.61 00:06:47.125 ======================================================== 00:06:47.125 Total : 17091.18 66.76 3745.42 721.95 16348.61 00:06:47.125 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:47.125 rmmod nvme_tcp 00:06:47.125 rmmod nvme_fabrics 00:06:47.125 rmmod nvme_keyring 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3536023 ']' 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3536023 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3536023 ']' 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3536023 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3536023 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3536023' 00:06:47.125 killing process with pid 3536023 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3536023 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3536023 00:06:47.125 nvmf threads initialize successfully 00:06:47.125 bdev subsystem init successfully 00:06:47.125 created a nvmf target service 00:06:47.125 create targets's poll groups done 00:06:47.125 all subsystems of target started 00:06:47.125 nvmf target is running 00:06:47.125 all subsystems of target stopped 00:06:47.125 destroy targets's poll groups done 00:06:47.125 destroyed the nvmf target service 00:06:47.125 bdev subsystem finish successfully 00:06:47.125 nvmf threads destroy successfully 00:06:47.125 21:44:39 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.125 21:44:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.696 21:44:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:47.696 21:44:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:47.696 21:44:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.696 21:44:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.696 00:06:47.696 real 0m19.226s 00:06:47.696 user 0m45.783s 00:06:47.696 sys 0m5.551s 00:06:47.696 21:44:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.696 21:44:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.696 ************************************ 00:06:47.696 END TEST nvmf_example 00:06:47.696 ************************************ 00:06:47.696 21:44:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:47.696 21:44:41 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:47.696 21:44:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.696 21:44:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.696 21:44:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.696 ************************************ 00:06:47.696 START TEST nvmf_filesystem 00:06:47.696 ************************************ 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:47.696 * Looking for test storage... 
00:06:47.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:47.696 21:44:41 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:47.696 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:47.697 #define SPDK_CONFIG_H 00:06:47.697 #define SPDK_CONFIG_APPS 1 00:06:47.697 #define SPDK_CONFIG_ARCH native 00:06:47.697 #undef SPDK_CONFIG_ASAN 00:06:47.697 #undef SPDK_CONFIG_AVAHI 00:06:47.697 #undef SPDK_CONFIG_CET 00:06:47.697 #define SPDK_CONFIG_COVERAGE 1 00:06:47.697 #define SPDK_CONFIG_CROSS_PREFIX 00:06:47.697 #undef SPDK_CONFIG_CRYPTO 00:06:47.697 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:47.697 #undef SPDK_CONFIG_CUSTOMOCF 00:06:47.697 #undef SPDK_CONFIG_DAOS 00:06:47.697 #define SPDK_CONFIG_DAOS_DIR 00:06:47.697 #define SPDK_CONFIG_DEBUG 1 00:06:47.697 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:47.697 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:47.697 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:47.697 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:47.697 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:47.697 #undef SPDK_CONFIG_DPDK_UADK 00:06:47.697 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:47.697 #define SPDK_CONFIG_EXAMPLES 1 00:06:47.697 #undef SPDK_CONFIG_FC 00:06:47.697 #define SPDK_CONFIG_FC_PATH 00:06:47.697 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:47.697 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:47.697 #undef SPDK_CONFIG_FUSE 00:06:47.697 #undef SPDK_CONFIG_FUZZER 00:06:47.697 #define SPDK_CONFIG_FUZZER_LIB 00:06:47.697 #undef SPDK_CONFIG_GOLANG 00:06:47.697 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:47.697 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:47.697 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:47.697 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:47.697 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:47.697 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:47.697 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:47.697 #define SPDK_CONFIG_IDXD 1 00:06:47.697 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:47.697 #undef SPDK_CONFIG_IPSEC_MB 00:06:47.697 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:47.697 #define SPDK_CONFIG_ISAL 1 00:06:47.697 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:47.697 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:47.697 #define SPDK_CONFIG_LIBDIR 00:06:47.697 #undef SPDK_CONFIG_LTO 00:06:47.697 #define SPDK_CONFIG_MAX_LCORES 128 00:06:47.697 #define SPDK_CONFIG_NVME_CUSE 1 00:06:47.697 #undef SPDK_CONFIG_OCF 00:06:47.697 #define SPDK_CONFIG_OCF_PATH 00:06:47.697 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:47.697 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:47.697 #define SPDK_CONFIG_PGO_DIR 00:06:47.697 #undef SPDK_CONFIG_PGO_USE 00:06:47.697 #define SPDK_CONFIG_PREFIX /usr/local 00:06:47.697 #undef SPDK_CONFIG_RAID5F 00:06:47.697 #undef SPDK_CONFIG_RBD 00:06:47.697 #define SPDK_CONFIG_RDMA 1 00:06:47.697 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:47.697 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:47.697 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:47.697 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:47.697 #define SPDK_CONFIG_SHARED 1 00:06:47.697 #undef SPDK_CONFIG_SMA 00:06:47.697 #define SPDK_CONFIG_TESTS 1 00:06:47.697 #undef SPDK_CONFIG_TSAN 00:06:47.697 #define SPDK_CONFIG_UBLK 1 00:06:47.697 #define SPDK_CONFIG_UBSAN 1 00:06:47.697 #undef SPDK_CONFIG_UNIT_TESTS 00:06:47.697 #undef SPDK_CONFIG_URING 00:06:47.697 #define SPDK_CONFIG_URING_PATH 00:06:47.697 #undef SPDK_CONFIG_URING_ZNS 00:06:47.697 #undef SPDK_CONFIG_USDT 00:06:47.697 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:47.697 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:47.697 #define SPDK_CONFIG_VFIO_USER 1 00:06:47.697 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:47.697 #define SPDK_CONFIG_VHOST 1 00:06:47.697 #define SPDK_CONFIG_VIRTIO 1 00:06:47.697 #undef SPDK_CONFIG_VTUNE 00:06:47.697 #define SPDK_CONFIG_VTUNE_DIR 00:06:47.697 #define SPDK_CONFIG_WERROR 1 00:06:47.697 #define SPDK_CONFIG_WPDK_DIR 00:06:47.697 #undef SPDK_CONFIG_XNVME 00:06:47.697 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.697 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:47.698 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:47.699 21:44:41 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3538415 ]] 00:06:47.699 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3538415 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.RkFD9L 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.RkFD9L/tests/target /tmp/spdk.RkFD9L 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189554683904 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6419615744 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986297856 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:47.986 21:44:41 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=851968 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:47.986 * Looking for test storage... 00:06:47.986 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189554683904 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8634208256 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:47.987 21:44:41 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.987 21:44:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.987 21:44:42 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:47.987 21:44:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:53.269 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:53.269 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.269 21:44:46 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:53.269 Found net devices under 0000:86:00.0: cvl_0_0 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:53.269 Found net devices under 0000:86:00.1: cvl_0_1 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:53.269 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:06:53.270 00:06:53.270 --- 10.0.0.2 ping statistics --- 00:06:53.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.270 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:06:53.270 00:06:53.270 --- 10.0.0.1 ping statistics --- 00:06:53.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.270 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.270 ************************************ 00:06:53.270 START TEST nvmf_filesystem_no_in_capsule 00:06:53.270 ************************************ 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3541434 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3541434 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3541434 ']' 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.270 21:44:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.270 [2024-07-15 21:44:46.916999] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:06:53.270 [2024-07-15 21:44:46.917044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.270 [2024-07-15 21:44:46.977830] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.270 [2024-07-15 21:44:47.054087] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.270 [2024-07-15 21:44:47.054127] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.270 [2024-07-15 21:44:47.054134] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.270 [2024-07-15 21:44:47.054139] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.270 [2024-07-15 21:44:47.054144] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
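For orientation: the nvmf_tcp_init sequence traced above builds a loopback topology across a network namespace on the dual-port E810 NIC. Port cvl_0_0 is moved into a fresh namespace (cvl_0_0_ns_spdk) and hosts the target at 10.0.0.2; its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port and two pings verifying reachability in both directions. A minimal sketch reconstructed from the trace — interface names, addresses, and flags are taken verbatim from the log, but this is a reconstruction, not the harness source itself:

# Reconstructed from nvmf/common.sh@242-268 as traced above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator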
00:06:53.270 [2024-07-15 21:44:47.054210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.270 [2024-07-15 21:44:47.054312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.270 [2024-07-15 21:44:47.054334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.270 [2024-07-15 21:44:47.054335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.529 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.529 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:53.529 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:53.529 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:53.529 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.530 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.530 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:53.530 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:53.530 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.530 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.789 [2024-07-15 21:44:47.771141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.789 Malloc1 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.789 [2024-07-15 21:44:47.916089] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.789 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:53.789 { 00:06:53.789 "name": "Malloc1", 00:06:53.789 "aliases": [ 00:06:53.789 "9d09ed9a-bd03-4e2f-894e-9878d9017ad3" 00:06:53.789 ], 00:06:53.789 "product_name": "Malloc disk", 00:06:53.789 "block_size": 512, 00:06:53.789 "num_blocks": 1048576, 00:06:53.789 "uuid": "9d09ed9a-bd03-4e2f-894e-9878d9017ad3", 00:06:53.789 "assigned_rate_limits": { 00:06:53.789 "rw_ios_per_sec": 0, 00:06:53.789 "rw_mbytes_per_sec": 0, 00:06:53.789 "r_mbytes_per_sec": 0, 00:06:53.789 "w_mbytes_per_sec": 0 00:06:53.789 }, 00:06:53.789 "claimed": true, 00:06:53.789 "claim_type": "exclusive_write", 00:06:53.789 "zoned": false, 00:06:53.789 "supported_io_types": { 00:06:53.789 "read": true, 00:06:53.789 "write": true, 00:06:53.789 "unmap": true, 00:06:53.789 "flush": true, 00:06:53.789 "reset": true, 00:06:53.789 "nvme_admin": false, 00:06:53.789 "nvme_io": false, 00:06:53.789 "nvme_io_md": false, 00:06:53.789 "write_zeroes": true, 00:06:53.789 "zcopy": true, 00:06:53.789 "get_zone_info": false, 00:06:53.789 "zone_management": false, 00:06:53.789 "zone_append": false, 00:06:53.789 "compare": false, 00:06:53.789 "compare_and_write": false, 00:06:53.789 "abort": true, 00:06:53.789 "seek_hole": false, 00:06:53.789 "seek_data": false, 00:06:53.789 "copy": true, 00:06:53.789 "nvme_iov_md": false 00:06:53.789 }, 00:06:53.789 "memory_domains": [ 00:06:53.789 { 
00:06:53.789 "dma_device_id": "system", 00:06:53.789 "dma_device_type": 1 00:06:53.789 }, 00:06:53.789 { 00:06:53.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.789 "dma_device_type": 2 00:06:53.789 } 00:06:53.790 ], 00:06:53.790 "driver_specific": {} 00:06:53.790 } 00:06:53.790 ]' 00:06:53.790 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:53.790 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:53.790 21:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:53.790 21:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:53.790 21:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:53.790 21:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:54.049 21:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:54.049 21:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:54.987 21:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:54.987 21:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:54.987 21:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:54.987 21:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:54.987 21:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:56.895 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:56.895 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:56.895 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:56.895 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:56.895 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:56.895 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:57.155 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:57.155 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:57.155 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:57.155 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:57.155 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:57.156 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:57.156 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:57.156 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:57.156 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:57.156 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:57.156 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:57.416 21:44:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:57.984 21:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:58.921 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:58.921 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:58.921 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:58.921 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.921 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.188 ************************************ 00:06:59.189 START TEST filesystem_ext4 00:06:59.189 ************************************ 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:59.189 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:59.189 21:44:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:59.189 mke2fs 1.46.5 (30-Dec-2021) 00:06:59.189 Discarding device blocks: 0/522240 done 00:06:59.189 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:59.189 Filesystem UUID: bceb9071-db36-4147-a806-152dc265c99a 00:06:59.189 Superblock backups stored on blocks: 00:06:59.189 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:59.189 00:06:59.189 Allocating group tables: 0/64 done 00:06:59.189 Writing inode tables: 0/64 done 00:06:59.760 Creating journal (8192 blocks): done 00:06:59.760 Writing superblocks and filesystem accounting information: 0/64 done 00:06:59.760 00:06:59.760 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:59.760 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:59.760 21:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3541434 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.018 00:07:00.018 real 0m0.878s 00:07:00.018 user 0m0.023s 00:07:00.018 sys 0m0.066s 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:00.018 ************************************ 00:07:00.018 END TEST filesystem_ext4 00:07:00.018 ************************************ 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:00.018 21:44:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.018 ************************************ 00:07:00.018 START TEST filesystem_btrfs 00:07:00.018 ************************************ 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:00.018 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:00.277 btrfs-progs v6.6.2 00:07:00.277 See https://btrfs.readthedocs.io for more information. 00:07:00.277 00:07:00.277 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:00.277 NOTE: several default settings have changed in version 5.15, please make sure 00:07:00.277 this does not affect your deployments: 00:07:00.277 - DUP for metadata (-m dup) 00:07:00.277 - enabled no-holes (-O no-holes) 00:07:00.277 - enabled free-space-tree (-R free-space-tree) 00:07:00.277 00:07:00.277 Label: (null) 00:07:00.277 UUID: 0616e847-8b3d-48f4-95d0-73d220941b44 00:07:00.277 Node size: 16384 00:07:00.277 Sector size: 4096 00:07:00.277 Filesystem size: 510.00MiB 00:07:00.277 Block group profiles: 00:07:00.277 Data: single 8.00MiB 00:07:00.277 Metadata: DUP 32.00MiB 00:07:00.277 System: DUP 8.00MiB 00:07:00.277 SSD detected: yes 00:07:00.277 Zoned device: no 00:07:00.277 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:00.277 Runtime features: free-space-tree 00:07:00.277 Checksum: crc32c 00:07:00.277 Number of devices: 1 00:07:00.277 Devices: 00:07:00.277 ID SIZE PATH 00:07:00.277 1 510.00MiB /dev/nvme0n1p1 00:07:00.277 00:07:00.277 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:00.277 21:44:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3541434 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:01.653 00:07:01.653 real 0m1.436s 00:07:01.653 user 0m0.037s 00:07:01.653 sys 0m0.111s 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:01.653 ************************************ 00:07:01.653 END TEST filesystem_btrfs 00:07:01.653 ************************************ 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:01.653 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.654 ************************************ 00:07:01.654 START TEST filesystem_xfs 00:07:01.654 ************************************ 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:01.654 21:44:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:01.654 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:01.654 = sectsz=512 attr=2, projid32bit=1 00:07:01.654 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:01.654 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:01.654 data = bsize=4096 blocks=130560, imaxpct=25 00:07:01.654 = sunit=0 swidth=0 blks 00:07:01.654 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:01.654 log =internal log bsize=4096 blocks=16384, version=2 00:07:01.654 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:01.654 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:02.590 Discarding blocks...Done. 
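The ext4 and btrfs subtests above, and the xfs subtest running here, all drive the same create-and-exercise flow from target/filesystem.sh: build the filesystem on the first GPT partition of the exported namespace, mount it, do a touch/sync/rm round trip, unmount, and confirm the target process and block devices survived. A condensed sketch mirroring the harness's nvmf_filesystem_create, reconstructed from the xtrace; retry logic and helper plumbing are omitted, and $nvmfpid stands for the nvmf_tgt PID (3541434 in this run):

# Per-fstype check as reconstructed from the trace; the partition was created
# earlier with: parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
nvmf_filesystem_create_sketch() {
    local fstype=$1 nvme_name=$2 force=-f
    [ "$fstype" = ext4 ] && force=-F              # mkfs.ext4 takes -F, btrfs/xfs take -f
    "mkfs.$fstype" "$force" "/dev/${nvme_name}p1"
    mount "/dev/${nvme_name}p1" /mnt/device       # basic I/O round trip
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                            # target must have survived the I/O
    lsblk -l -o NAME | grep -q -w "$nvme_name"    # device and partition still visible
    lsblk -l -o NAME | grep -q -w "${nvme_name}p1"
}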
00:07:02.590 21:44:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:02.590 21:44:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3541434 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:05.125 21:44:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:05.125 00:07:05.125 real 0m3.364s 00:07:05.125 user 0m0.016s 00:07:05.125 sys 0m0.078s 00:07:05.125 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.125 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:05.125 ************************************ 00:07:05.125 END TEST filesystem_xfs 00:07:05.125 ************************************ 00:07:05.125 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:05.125 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:05.125 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:05.125 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:05.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.384 21:44:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3541434 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3541434 ']' 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3541434 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3541434 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3541434' 00:07:05.384 killing process with pid 3541434 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3541434 00:07:05.384 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3541434 00:07:05.643 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:05.643 00:07:05.643 real 0m13.006s 00:07:05.643 user 0m51.084s 00:07:05.643 sys 0m1.249s 00:07:05.643 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.643 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.643 ************************************ 00:07:05.643 END TEST nvmf_filesystem_no_in_capsule 00:07:05.643 ************************************ 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.902 ************************************ 00:07:05.902 START TEST nvmf_filesystem_in_capsule 00:07:05.902 ************************************ 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3543734 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3543734 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3543734 ']' 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.902 21:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.902 [2024-07-15 21:44:59.992141] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:05.902 [2024-07-15 21:44:59.992186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.902 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.902 [2024-07-15 21:45:00.054244] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.902 [2024-07-15 21:45:00.139255] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.902 [2024-07-15 21:45:00.139292] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:05.902 [2024-07-15 21:45:00.139300] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.902 [2024-07-15 21:45:00.139306] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.902 [2024-07-15 21:45:00.139310] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.902 [2024-07-15 21:45:00.139351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.902 [2024-07-15 21:45:00.139448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.902 [2024-07-15 21:45:00.139533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.902 [2024-07-15 21:45:00.139534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.839 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.840 [2024-07-15 21:45:00.857297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.840 Malloc1 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.840 21:45:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.840 21:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.840 [2024-07-15 21:45:01.000913] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:06.840 { 00:07:06.840 "name": "Malloc1", 00:07:06.840 "aliases": [ 00:07:06.840 "4d707a30-8df0-4b25-9b0e-070621e7d61e" 00:07:06.840 ], 00:07:06.840 "product_name": "Malloc disk", 00:07:06.840 "block_size": 512, 00:07:06.840 "num_blocks": 1048576, 00:07:06.840 "uuid": "4d707a30-8df0-4b25-9b0e-070621e7d61e", 00:07:06.840 "assigned_rate_limits": { 00:07:06.840 "rw_ios_per_sec": 0, 00:07:06.840 "rw_mbytes_per_sec": 0, 00:07:06.840 "r_mbytes_per_sec": 0, 00:07:06.840 "w_mbytes_per_sec": 0 00:07:06.840 }, 00:07:06.840 "claimed": true, 00:07:06.840 "claim_type": "exclusive_write", 00:07:06.840 "zoned": false, 00:07:06.840 "supported_io_types": { 00:07:06.840 "read": true, 00:07:06.840 "write": true, 00:07:06.840 "unmap": true, 00:07:06.840 "flush": true, 00:07:06.840 "reset": true, 00:07:06.840 "nvme_admin": false, 00:07:06.840 "nvme_io": false, 00:07:06.840 "nvme_io_md": false, 00:07:06.840 "write_zeroes": true, 00:07:06.840 "zcopy": true, 00:07:06.840 "get_zone_info": false, 00:07:06.840 "zone_management": false, 00:07:06.840 
"zone_append": false, 00:07:06.840 "compare": false, 00:07:06.840 "compare_and_write": false, 00:07:06.840 "abort": true, 00:07:06.840 "seek_hole": false, 00:07:06.840 "seek_data": false, 00:07:06.840 "copy": true, 00:07:06.840 "nvme_iov_md": false 00:07:06.840 }, 00:07:06.840 "memory_domains": [ 00:07:06.840 { 00:07:06.840 "dma_device_id": "system", 00:07:06.840 "dma_device_type": 1 00:07:06.840 }, 00:07:06.840 { 00:07:06.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.840 "dma_device_type": 2 00:07:06.840 } 00:07:06.840 ], 00:07:06.840 "driver_specific": {} 00:07:06.840 } 00:07:06.840 ]' 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:06.840 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:07.099 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:07.099 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:07.099 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:07.099 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:07.099 21:45:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.076 21:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.076 21:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:08.076 21:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.076 21:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:08.076 21:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:10.609 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:10.869 21:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:11.840 21:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:11.840 21:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:11.840 21:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:11.840 21:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.840 21:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.840 ************************************ 00:07:11.840 START TEST filesystem_in_capsule_ext4 00:07:11.840 ************************************ 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:11.840 21:45:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:11.840 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:11.840 mke2fs 1.46.5 (30-Dec-2021) 00:07:12.098 Discarding device blocks: 0/522240 done 00:07:12.098 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:12.098 Filesystem UUID: bce6336f-ad93-492a-bf64-e21b7f40d9e4 00:07:12.098 Superblock backups stored on blocks: 00:07:12.098 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:12.098 00:07:12.098 Allocating group tables: 0/64 done 00:07:12.098 Writing inode tables: 0/64 done 00:07:12.098 Creating journal (8192 blocks): done 00:07:12.098 Writing superblocks and filesystem accounting information: 0/64 done 00:07:12.098 00:07:12.098 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:12.098 21:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3543734 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.041 00:07:13.041 real 0m1.166s 00:07:13.041 user 0m0.025s 00:07:13.041 sys 0m0.062s 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:13.041 ************************************ 00:07:13.041 END TEST filesystem_in_capsule_ext4 00:07:13.041 ************************************ 00:07:13.041 
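Annotation: make_filesystem picks the force flag per filesystem before calling mkfs, which is why the trace shows force=-F for ext4 here but force=-f for the btrfs and xfs runs below. A condensed sketch reconstructed from the xtrace (the real helper in common/autotest_common.sh also carries a retry counter, omitted here):

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F    # mkfs.ext4 spells "force" as -F
    else
        force=-f    # mkfs.btrfs and mkfs.xfs use -f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}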
21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.041 ************************************ 00:07:13.041 START TEST filesystem_in_capsule_btrfs 00:07:13.041 ************************************ 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:13.041 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:13.300 btrfs-progs v6.6.2 00:07:13.300 See https://btrfs.readthedocs.io for more information. 00:07:13.300 00:07:13.300 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:13.300 NOTE: several default settings have changed in version 5.15, please make sure 00:07:13.300 this does not affect your deployments: 00:07:13.300 - DUP for metadata (-m dup) 00:07:13.300 - enabled no-holes (-O no-holes) 00:07:13.301 - enabled free-space-tree (-R free-space-tree) 00:07:13.301 00:07:13.301 Label: (null) 00:07:13.301 UUID: ba67240d-60c9-4a4c-a244-405e673eda22 00:07:13.301 Node size: 16384 00:07:13.301 Sector size: 4096 00:07:13.301 Filesystem size: 510.00MiB 00:07:13.301 Block group profiles: 00:07:13.301 Data: single 8.00MiB 00:07:13.301 Metadata: DUP 32.00MiB 00:07:13.301 System: DUP 8.00MiB 00:07:13.301 SSD detected: yes 00:07:13.301 Zoned device: no 00:07:13.301 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:13.301 Runtime features: free-space-tree 00:07:13.301 Checksum: crc32c 00:07:13.301 Number of devices: 1 00:07:13.301 Devices: 00:07:13.301 ID SIZE PATH 00:07:13.301 1 510.00MiB /dev/nvme0n1p1 00:07:13.301 00:07:13.301 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:13.301 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.559 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.559 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:13.559 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.559 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:13.559 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:13.559 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3543734 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.818 00:07:13.818 real 0m0.589s 00:07:13.818 user 0m0.025s 00:07:13.818 sys 0m0.127s 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:13.818 ************************************ 00:07:13.818 END TEST filesystem_in_capsule_btrfs 00:07:13.818 ************************************ 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.818 ************************************ 00:07:13.818 START TEST filesystem_in_capsule_xfs 00:07:13.818 ************************************ 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:13.818 21:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:13.818 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:13.818 = sectsz=512 attr=2, projid32bit=1 00:07:13.818 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:13.818 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:13.818 data = bsize=4096 blocks=130560, imaxpct=25 00:07:13.818 = sunit=0 swidth=0 blks 00:07:13.818 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:13.818 log =internal log bsize=4096 blocks=16384, version=2 00:07:13.818 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:13.818 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:14.752 Discarding blocks...Done. 
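Annotation: after each mkfs, the test runs the same mount/IO/unmount verification cycle, as the xfs traces that follow show. Condensed from the filesystem.sh@23-43 steps repeated throughout this section (retry handling simplified; nvmfpid stands in for the literal target pid 3543734 seen above):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                        # target process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible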
00:07:14.752 21:45:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:14.752 21:45:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:17.283 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:17.283 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:17.283 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:17.283 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:17.283 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3543734 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:17.284 00:07:17.284 real 0m3.427s 00:07:17.284 user 0m0.030s 00:07:17.284 sys 0m0.067s 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:17.284 ************************************ 00:07:17.284 END TEST filesystem_in_capsule_xfs 00:07:17.284 ************************************ 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:17.284 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:17.542 21:45:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3543734 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3543734 ']' 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3543734 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3543734 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3543734' 00:07:17.542 killing process with pid 3543734 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3543734 00:07:17.542 21:45:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3543734 00:07:17.801 21:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:17.801 00:07:17.801 real 0m12.071s 00:07:17.801 user 0m47.360s 00:07:17.801 sys 0m1.205s 00:07:17.801 21:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.801 21:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.801 ************************************ 00:07:17.801 END TEST nvmf_filesystem_in_capsule 00:07:17.801 ************************************ 00:07:17.801 21:45:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:18.061 rmmod nvme_tcp 00:07:18.061 rmmod nvme_fabrics 00:07:18.061 rmmod nvme_keyring 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.061 21:45:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.967 21:45:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:19.967 00:07:19.967 real 0m32.430s 00:07:19.967 user 1m39.845s 00:07:19.967 sys 0m6.193s 00:07:19.967 21:45:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.967 21:45:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.967 ************************************ 00:07:19.967 END TEST nvmf_filesystem 00:07:19.967 ************************************ 00:07:19.967 21:45:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:19.967 21:45:14 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:19.967 21:45:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:19.967 21:45:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.967 21:45:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:20.228 ************************************ 00:07:20.228 START TEST nvmf_target_discovery 00:07:20.228 ************************************ 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:20.228 * Looking for test storage... 
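Annotation: nvmftestfini above tears the TCP transport back down before the next test starts. The order, taken from the nvmf/common.sh traces, is roughly as follows (the netns removal inside _remove_spdk_ns is an assumption, since its body is not shown in this log):

sync
modprobe -v -r nvme-tcp        # also unloads nvme_fabrics and nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed _remove_spdk_ns behavior
ip -4 addr flush cvl_0_1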
00:07:20.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:20.228 21:45:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.508 21:45:19 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:25.508 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:25.509 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:25.509 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:25.509 Found net devices under 0000:86:00.0: cvl_0_0 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:25.509 Found net devices under 0000:86:00.1: cvl_0_1 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:25.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:07:25.509 00:07:25.509 --- 10.0.0.2 ping statistics --- 00:07:25.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.509 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:07:25.509 00:07:25.509 --- 10.0.0.1 ping statistics --- 00:07:25.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.509 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3549981 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3549981 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3549981 ']' 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:25.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.509 21:45:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.768 [2024-07-15 21:45:19.791195] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:25.768 [2024-07-15 21:45:19.791262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.768 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.768 [2024-07-15 21:45:19.849398] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.768 [2024-07-15 21:45:19.932983] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.768 [2024-07-15 21:45:19.933018] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.768 [2024-07-15 21:45:19.933024] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.768 [2024-07-15 21:45:19.933031] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.768 [2024-07-15 21:45:19.933036] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.768 [2024-07-15 21:45:19.933097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.768 [2024-07-15 21:45:19.933203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.768 [2024-07-15 21:45:19.933311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.768 [2024-07-15 21:45:19.933313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 [2024-07-15 21:45:20.636080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
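Annotation: the RPC traces that follow build the discovery target: four null bdevs, each wrapped in its own subsystem with a TCP listener, plus a discovery listener and a referral on port 4430. That is what produces the six discovery log records printed further down. Condensed into the target/discovery.sh loop (direct rpc.py invocation is illustrative; the test itself issues these through rpc_cmd against the same socket):

rpc=./scripts/rpc.py
for i in $(seq 1 4); do
    $rpc bdev_null_create Null$i 102400 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430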
00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 Null1 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 [2024-07-15 21:45:20.681618] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 Null2 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:26.705 21:45:20 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 Null3 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 Null4 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.705 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:26.705 00:07:26.705 Discovery Log Number of Records 6, Generation counter 6 00:07:26.705 =====Discovery Log Entry 0====== 00:07:26.705 trtype: tcp 00:07:26.705 adrfam: ipv4 00:07:26.705 subtype: current discovery subsystem 00:07:26.705 treq: not required 00:07:26.705 portid: 0 00:07:26.705 trsvcid: 4420 00:07:26.705 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:26.705 traddr: 10.0.0.2 00:07:26.705 eflags: explicit discovery connections, duplicate discovery information 00:07:26.706 sectype: none 00:07:26.706 =====Discovery Log Entry 1====== 00:07:26.706 trtype: tcp 00:07:26.706 adrfam: ipv4 00:07:26.706 subtype: nvme subsystem 00:07:26.706 treq: not required 00:07:26.706 portid: 0 00:07:26.706 trsvcid: 4420 00:07:26.706 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:26.706 traddr: 10.0.0.2 00:07:26.706 eflags: none 00:07:26.706 sectype: none 00:07:26.706 =====Discovery Log Entry 2====== 00:07:26.706 trtype: tcp 00:07:26.706 adrfam: ipv4 00:07:26.706 subtype: nvme subsystem 00:07:26.706 treq: not required 00:07:26.706 portid: 0 00:07:26.706 trsvcid: 4420 00:07:26.706 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:26.706 traddr: 10.0.0.2 00:07:26.706 eflags: none 00:07:26.706 sectype: none 00:07:26.706 =====Discovery Log Entry 3====== 00:07:26.706 trtype: tcp 00:07:26.706 adrfam: ipv4 00:07:26.706 subtype: nvme subsystem 00:07:26.706 treq: not required 00:07:26.706 portid: 0 00:07:26.706 trsvcid: 4420 00:07:26.706 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:26.706 traddr: 10.0.0.2 00:07:26.706 eflags: none 00:07:26.706 sectype: none 00:07:26.706 =====Discovery Log Entry 4====== 00:07:26.706 trtype: tcp 00:07:26.706 adrfam: ipv4 00:07:26.706 subtype: nvme subsystem 00:07:26.706 treq: not required 
00:07:26.706 portid: 0 00:07:26.706 trsvcid: 4420 00:07:26.706 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:26.706 traddr: 10.0.0.2 00:07:26.706 eflags: none 00:07:26.706 sectype: none 00:07:26.706 =====Discovery Log Entry 5====== 00:07:26.706 trtype: tcp 00:07:26.706 adrfam: ipv4 00:07:26.706 subtype: discovery subsystem referral 00:07:26.706 treq: not required 00:07:26.706 portid: 0 00:07:26.706 trsvcid: 4430 00:07:26.706 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:26.706 traddr: 10.0.0.2 00:07:26.706 eflags: none 00:07:26.706 sectype: none 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:26.706 Perform nvmf subsystem discovery via RPC 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.706 [ 00:07:26.706 { 00:07:26.706 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:26.706 "subtype": "Discovery", 00:07:26.706 "listen_addresses": [ 00:07:26.706 { 00:07:26.706 "trtype": "TCP", 00:07:26.706 "adrfam": "IPv4", 00:07:26.706 "traddr": "10.0.0.2", 00:07:26.706 "trsvcid": "4420" 00:07:26.706 } 00:07:26.706 ], 00:07:26.706 "allow_any_host": true, 00:07:26.706 "hosts": [] 00:07:26.706 }, 00:07:26.706 { 00:07:26.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:26.706 "subtype": "NVMe", 00:07:26.706 "listen_addresses": [ 00:07:26.706 { 00:07:26.706 "trtype": "TCP", 00:07:26.706 "adrfam": "IPv4", 00:07:26.706 "traddr": "10.0.0.2", 00:07:26.706 "trsvcid": "4420" 00:07:26.706 } 00:07:26.706 ], 00:07:26.706 "allow_any_host": true, 00:07:26.706 "hosts": [], 00:07:26.706 "serial_number": "SPDK00000000000001", 00:07:26.706 "model_number": "SPDK bdev Controller", 00:07:26.706 "max_namespaces": 32, 00:07:26.706 "min_cntlid": 1, 00:07:26.706 "max_cntlid": 65519, 00:07:26.706 "namespaces": [ 00:07:26.706 { 00:07:26.706 "nsid": 1, 00:07:26.706 "bdev_name": "Null1", 00:07:26.706 "name": "Null1", 00:07:26.706 "nguid": "CBADADC1B7B14371952AE38BF612F626", 00:07:26.706 "uuid": "cbadadc1-b7b1-4371-952a-e38bf612f626" 00:07:26.706 } 00:07:26.706 ] 00:07:26.706 }, 00:07:26.706 { 00:07:26.706 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:26.706 "subtype": "NVMe", 00:07:26.706 "listen_addresses": [ 00:07:26.706 { 00:07:26.706 "trtype": "TCP", 00:07:26.706 "adrfam": "IPv4", 00:07:26.706 "traddr": "10.0.0.2", 00:07:26.706 "trsvcid": "4420" 00:07:26.706 } 00:07:26.706 ], 00:07:26.706 "allow_any_host": true, 00:07:26.706 "hosts": [], 00:07:26.706 "serial_number": "SPDK00000000000002", 00:07:26.706 "model_number": "SPDK bdev Controller", 00:07:26.706 "max_namespaces": 32, 00:07:26.706 "min_cntlid": 1, 00:07:26.706 "max_cntlid": 65519, 00:07:26.706 "namespaces": [ 00:07:26.706 { 00:07:26.706 "nsid": 1, 00:07:26.706 "bdev_name": "Null2", 00:07:26.706 "name": "Null2", 00:07:26.706 "nguid": "FC81A5E93C7B43EE93B120FAE8A11B0C", 00:07:26.706 "uuid": "fc81a5e9-3c7b-43ee-93b1-20fae8a11b0c" 00:07:26.706 } 00:07:26.706 ] 00:07:26.706 }, 00:07:26.706 { 00:07:26.706 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:26.706 "subtype": "NVMe", 00:07:26.706 "listen_addresses": [ 00:07:26.706 { 00:07:26.706 "trtype": "TCP", 00:07:26.706 "adrfam": "IPv4", 00:07:26.706 "traddr": "10.0.0.2", 00:07:26.706 "trsvcid": "4420" 00:07:26.706 } 00:07:26.706 ], 00:07:26.706 "allow_any_host": true, 
00:07:26.706 "hosts": [], 00:07:26.706 "serial_number": "SPDK00000000000003", 00:07:26.706 "model_number": "SPDK bdev Controller", 00:07:26.706 "max_namespaces": 32, 00:07:26.706 "min_cntlid": 1, 00:07:26.706 "max_cntlid": 65519, 00:07:26.706 "namespaces": [ 00:07:26.706 { 00:07:26.706 "nsid": 1, 00:07:26.706 "bdev_name": "Null3", 00:07:26.706 "name": "Null3", 00:07:26.706 "nguid": "63E4EB6E036C44D1A5FC3FD8C556245E", 00:07:26.706 "uuid": "63e4eb6e-036c-44d1-a5fc-3fd8c556245e" 00:07:26.706 } 00:07:26.706 ] 00:07:26.706 }, 00:07:26.706 { 00:07:26.706 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:26.706 "subtype": "NVMe", 00:07:26.706 "listen_addresses": [ 00:07:26.706 { 00:07:26.706 "trtype": "TCP", 00:07:26.706 "adrfam": "IPv4", 00:07:26.706 "traddr": "10.0.0.2", 00:07:26.706 "trsvcid": "4420" 00:07:26.706 } 00:07:26.706 ], 00:07:26.706 "allow_any_host": true, 00:07:26.706 "hosts": [], 00:07:26.706 "serial_number": "SPDK00000000000004", 00:07:26.706 "model_number": "SPDK bdev Controller", 00:07:26.706 "max_namespaces": 32, 00:07:26.706 "min_cntlid": 1, 00:07:26.706 "max_cntlid": 65519, 00:07:26.706 "namespaces": [ 00:07:26.706 { 00:07:26.706 "nsid": 1, 00:07:26.706 "bdev_name": "Null4", 00:07:26.706 "name": "Null4", 00:07:26.706 "nguid": "5B4D7C12E846461EBB0F3D06E1D22066", 00:07:26.706 "uuid": "5b4d7c12-e846-461e-bb0f-3d06e1d22066" 00:07:26.706 } 00:07:26.706 ] 00:07:26.706 } 00:07:26.706 ] 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.706 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.966 21:45:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:26.966 rmmod nvme_tcp 00:07:26.966 rmmod nvme_fabrics 00:07:26.966 rmmod nvme_keyring 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3549981 ']' 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3549981 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3549981 ']' 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3549981 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3549981 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3549981' 00:07:26.966 killing process with pid 3549981 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3549981 00:07:26.966 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3549981 00:07:27.225 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.225 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.225 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.225 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.225 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.225 21:45:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.225 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.225 21:45:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.763 21:45:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:29.763 00:07:29.763 real 0m9.142s 00:07:29.763 user 0m7.234s 00:07:29.763 sys 0m4.363s 00:07:29.763 21:45:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.764 21:45:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.764 ************************************ 00:07:29.764 END TEST nvmf_target_discovery 00:07:29.764 ************************************ 00:07:29.764 21:45:23 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:29.764 21:45:23 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:29.764 21:45:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.764 21:45:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.764 21:45:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.764 ************************************ 00:07:29.764 START TEST nvmf_referrals 00:07:29.764 ************************************ 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:29.764 * Looking for test storage... 00:07:29.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
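
referrals.sh pins three referral targets (127.0.0.2 through 127.0.0.4) and, just below, the referral port 4430; the suite then adds, enumerates, and removes those referrals against the discovery service. The full lifecycle it exercises, condensed into a sketch against a running target:

# Add three referrals, confirm the count over RPC, then remove them again.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 3
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
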
00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.764 21:45:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.042 21:45:28 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.042 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.043 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.043 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.043 21:45:28 
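
NIC detection in nvmf/common.sh is a pure PCI-ID table walk: 0x8086:0x159b identifies the Intel E810 functions matched above, and the net/ directory that sysfs exposes under each PCI function supplies the interface names. A self-contained sketch of that probe using only sysfs, with no SPDK helpers involved:

# Find E810 (vendor 0x8086, device 0x159b) functions and their net interfaces.
for dev in /sys/bus/pci/devices/*; do
  [[ $(cat "$dev/vendor") == 0x8086 && $(cat "$dev/device") == 0x159b ]] || continue
  for nic in "$dev"/net/*; do
    [[ -e $nic ]] && echo "Found ${dev##*/}: ${nic##*/}"
  done
done
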
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:35.043 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.043 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.043 21:45:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.043 21:45:29 
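
nvmf_tcp_init, running here, splits the two E810 ports so target and initiator traffic cross a real link: cvl_0_0 becomes the target interface (10.0.0.2) inside the cvl_0_0_ns_spdk namespace, while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). The topology, condensed into a sketch with the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                         # initiator-to-target sanity check
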
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:07:35.043 00:07:35.043 --- 10.0.0.2 ping statistics --- 00:07:35.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.043 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:07:35.043 00:07:35.043 --- 10.0.0.1 ping statistics --- 00:07:35.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.043 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3553622 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3553622 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3553622 ']' 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:35.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.043 21:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.043 [2024-07-15 21:45:29.245735] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:07:35.043 [2024-07-15 21:45:29.245780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.043 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.302 [2024-07-15 21:45:29.305102] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.302 [2024-07-15 21:45:29.386548] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.302 [2024-07-15 21:45:29.386582] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.302 [2024-07-15 21:45:29.386589] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.302 [2024-07-15 21:45:29.386595] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.302 [2024-07-15 21:45:29.386604] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.302 [2024-07-15 21:45:29.386658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.302 [2024-07-15 21:45:29.386673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.302 [2024-07-15 21:45:29.386765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.302 [2024-07-15 21:45:29.386766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.871 [2024-07-15 21:45:30.103253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.871 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.129 [2024-07-15 21:45:30.116727] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.129 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.389 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:36.686 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.687 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:36.968 21:45:30 
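
Both comparison helpers above parse the JSON output mode of nvme discover: get_referral_ips pulls traddr from every record that is not the local discovery subsystem, and get_discovery_entries filters records by their subtype string. The parsing in isolation (the --hostnqn/--hostid flags are elided here for brevity; address and port as in the log):

# Referral addresses the discovery service is advertising, sorted:
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# Subsystem NQNs of "nvme subsystem" records only:
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
  jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
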
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:36.968 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:36.968 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:36.968 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:36.968 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.968 21:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:36.968 21:45:31 
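
The two referral variants above differ only in -n: the literal word discovery expands to the well-known NQN nqn.2014-08.org.nvmexpress.discovery (visible in the referral entries earlier), while a concrete NQN advertises that one subsystem directly, and removal must repeat the exact transport, address, port, and NQN. Side by side, as a sketch:

# Referral pointing at another discovery service:
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
# Referral pointing straight at one NVM subsystem:
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
# Removal is keyed on all four fields together:
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
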
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:36.968 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:37.227 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:37.227 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:37.227 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:37.227 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:37.227 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:37.227 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:37.227 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:37.227 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:37.487 
21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:37.487 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:37.487 rmmod nvme_tcp 00:07:37.487 rmmod nvme_fabrics 00:07:37.487 rmmod nvme_keyring 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3553622 ']' 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3553622 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3553622 ']' 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3553622 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3553622 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3553622' 00:07:37.747 killing process with pid 3553622 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3553622 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3553622 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.747 21:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.283 21:45:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:40.283 00:07:40.283 real 0m10.575s 00:07:40.283 user 0m12.718s 00:07:40.283 sys 0m4.848s 00:07:40.283 21:45:34 
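
nvmftestfini, finishing above, unwinds the environment in reverse order of setup: unload the host-side NVMe modules, kill the nvmf_tgt reactor by PID, drop the test namespace, and flush the leftover initiator address. The shutdown order as a sketch (PID 3553622 and the names are from this run; a physical port returns to the root namespace once its netns is deleted):

modprobe -r nvme-tcp            # also drops the nvme_fabrics/nvme_keyring deps, as the rmmod lines above show
kill 3553622                    # nvmf_tgt reactor PID for this run; the harness then waits for it to exit
ip netns delete cvl_0_0_ns_spdk # cvl_0_0 falls back to the root namespace
ip -4 addr flush cvl_0_1
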
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.283 21:45:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.283 ************************************ 00:07:40.283 END TEST nvmf_referrals 00:07:40.283 ************************************ 00:07:40.283 21:45:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:40.283 21:45:34 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:40.283 21:45:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.283 21:45:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.283 21:45:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.283 ************************************ 00:07:40.283 START TEST nvmf_connect_disconnect 00:07:40.283 ************************************ 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:40.283 * Looking for test storage... 00:07:40.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.283 21:45:34 
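The START TEST / END TEST banners and the real/user/sys blocks come from the run_test wrapper in autotest_common.sh. Its rough shape, inferred from this output only (the actual helper also propagates the command's exit status and does the '[' 3 -le 1 ']' argument check visible in the trace):

    # Inferred sketch of run_test, not the SPDK implementation.
    run_test() {
        local name=$1 stars; shift
        stars=$(printf '*%.0s' {1..36})
        printf '%s\nSTART TEST %s\n%s\n' "$stars" "$name" "$stars"
        time "$@"                 # -> the real/user/sys block seen above
        printf '%s\nEND TEST %s\n%s\n' "$stars" "$name" "$stars"
    }
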
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.283 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.284 21:45:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:45.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:45.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.557 21:45:39 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:45.557 Found net devices under 0000:86:00.0: cvl_0_0 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:45.557 Found net devices under 0000:86:00.1: cvl_0_1 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- 
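The device scan above boils down to a sysfs walk: for each supported PCI function, glob its net/ directory to recover the kernel interface name. Condensed directly from the traced commands:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    # With two interfaces found, the first becomes the target side and the
    # second the initiator side:
    NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1 in this run
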
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.557 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:07:45.558 00:07:45.558 --- 10.0.0.2 ping statistics --- 00:07:45.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.558 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:07:45.558 00:07:45.558 --- 10.0.0.1 ping statistics --- 00:07:45.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.558 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3557689 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3557689 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3557689 ']' 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.558 21:45:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:45.558 [2024-07-15 21:45:39.465171] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
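Summarizing the network plumbing traced above: the target-side port is moved into its own namespace so initiator and target traffic actually traverses the wire, both directions are ping-verified, and nvmf_tgt is then launched inside that namespace. The commands, collected in order from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF
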
00:07:45.558 [2024-07-15 21:45:39.465215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.558 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.558 [2024-07-15 21:45:39.523341] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.558 [2024-07-15 21:45:39.603606] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.558 [2024-07-15 21:45:39.603641] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.558 [2024-07-15 21:45:39.603648] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.558 [2024-07-15 21:45:39.603654] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.558 [2024-07-15 21:45:39.603659] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.558 [2024-07-15 21:45:39.603709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.558 [2024-07-15 21:45:39.603794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.558 [2024-07-15 21:45:39.603896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.558 [2024-07-15 21:45:39.603897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.127 [2024-07-15 21:45:40.327298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:46.127 21:45:40 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.127 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.386 [2024-07-15 21:45:40.379357] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:46.386 21:45:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:49.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.841 rmmod nvme_tcp 00:08:02.841 rmmod nvme_fabrics 00:08:02.841 rmmod nvme_keyring 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3557689 ']' 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3557689 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- 
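Before the five disconnect summaries above, the target is provisioned with a short RPC sequence; condensed from the trace (rpc_cmd is the rpc.py wrapper from autotest_common.sh):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                       # -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # then 5 iterations of nvme connect / nvme disconnect against that NQN;
    # the loop runs under set +x, so only the disconnect summaries appear above
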
common/autotest_common.sh@948 -- # '[' -z 3557689 ']' 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3557689 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3557689 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3557689' 00:08:02.841 killing process with pid 3557689 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3557689 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3557689 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.841 21:45:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.749 21:45:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.749 00:08:04.749 real 0m24.764s 00:08:04.749 user 1m9.853s 00:08:04.749 sys 0m5.071s 00:08:04.749 21:45:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.749 21:45:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 ************************************ 00:08:04.749 END TEST nvmf_connect_disconnect 00:08:04.749 ************************************ 00:08:04.749 21:45:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:04.749 21:45:58 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:04.749 21:45:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.749 21:45:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.749 21:45:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 ************************************ 00:08:04.749 START TEST nvmf_multitarget 00:08:04.749 ************************************ 00:08:04.749 21:45:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:05.011 * Looking for test storage... 
00:08:05.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.011 21:45:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:05.012 21:45:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:10.327 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.327 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.327 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.327 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.327 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:10.328 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:10.328 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:10.328 Found net devices under 0000:86:00.0: cvl_0_0 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:10.328 Found net devices under 0000:86:00.1: cvl_0_1 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:10.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:08:10.328 00:08:10.328 --- 10.0.0.2 ping statistics --- 00:08:10.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.328 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:08:10.328 00:08:10.328 --- 10.0.0.1 ping statistics --- 00:08:10.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.328 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3564102 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3564102 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3564102 ']' 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.328 21:46:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:10.328 [2024-07-15 21:46:04.447679] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:08:10.328 [2024-07-15 21:46:04.447720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.328 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.328 [2024-07-15 21:46:04.500803] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.586 [2024-07-15 21:46:04.581347] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.586 [2024-07-15 21:46:04.581383] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.586 [2024-07-15 21:46:04.581393] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.586 [2024-07-15 21:46:04.581399] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.586 [2024-07-15 21:46:04.581404] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.586 [2024-07-15 21:46:04.581446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.586 [2024-07-15 21:46:04.581544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.586 [2024-07-15 21:46:04.581626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.586 [2024-07-15 21:46:04.581628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.151 21:46:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.151 21:46:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:11.151 21:46:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.151 21:46:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.151 21:46:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:11.151 21:46:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.151 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:11.151 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:11.151 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:11.408 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:11.408 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:11.408 "nvmf_tgt_1" 00:08:11.408 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:11.408 "nvmf_tgt_2" 00:08:11.408 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:11.408 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:11.666 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:11.666 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:11.666 true 00:08:11.666 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:11.923 true 00:08:11.923 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:11.924 21:46:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.924 rmmod nvme_tcp 00:08:11.924 rmmod nvme_fabrics 00:08:11.924 rmmod nvme_keyring 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3564102 ']' 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3564102 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3564102 ']' 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3564102 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3564102 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3564102' 00:08:11.924 killing process with pid 3564102 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3564102 00:08:11.924 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3564102 00:08:12.182 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.182 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.182 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.182 21:46:06 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.182 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.182 21:46:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.182 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.182 21:46:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.719 21:46:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:14.719 00:08:14.719 real 0m9.483s 00:08:14.719 user 0m9.303s 00:08:14.719 sys 0m4.452s 00:08:14.719 21:46:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.719 21:46:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 ************************************ 00:08:14.719 END TEST nvmf_multitarget 00:08:14.719 ************************************ 00:08:14.719 21:46:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:14.719 21:46:08 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:14.719 21:46:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:14.719 21:46:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.719 21:46:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:14.719 ************************************ 00:08:14.719 START TEST nvmf_rpc 00:08:14.719 ************************************ 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:14.719 * Looking for test storage... 
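The nvmf_multitarget test that wraps up above reduces to a target-count check around nvmf_create_target/nvmf_delete_target: the trace compares jq length against 1 (default target only), then 3 after two creates, then 1 again after the deletes. A condensed sketch of that pattern, using only commands that appear verbatim in the trace (error handling elided):

    RPC=./test/nvmf/target/multitarget_rpc.py
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default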
00:08:14.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.719 21:46:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:14.720 21:46:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
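nvmftestinit, entered above, discovers the e810 ports and then rebuilds the two-namespace TCP topology that the following lines trace: one port is moved into a network namespace to host the target at 10.0.0.2 while the other stays in the root namespace as the initiator at 10.0.0.1. A rough outline of that setup, with interface names, addresses, and the gen-hostnqn step copied from the trace (this is a sketch, not the actual common.sh logic):

    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    ip netns add cvl_0_0_ns_spdk                # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # first port -> target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP (port 4420) in through the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # root ns -> target ns sanity check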
00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:19.995 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:19.995 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:19.995 Found net devices under 0000:86:00.0: cvl_0_0 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:19.995 Found net devices under 0000:86:00.1: cvl_0_1 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.995 21:46:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:08:19.995 00:08:19.995 --- 10.0.0.2 ping statistics --- 00:08:19.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.995 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:08:19.995 00:08:19.995 --- 10.0.0.1 ping statistics --- 00:08:19.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.995 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3567881 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3567881 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3567881 ']' 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.995 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.995 [2024-07-15 21:46:14.191562] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:08:19.995 [2024-07-15 21:46:14.191603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.995 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.254 [2024-07-15 21:46:14.250577] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.254 [2024-07-15 21:46:14.324830] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.254 [2024-07-15 21:46:14.324871] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:20.254 [2024-07-15 21:46:14.324878] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.254 [2024-07-15 21:46:14.324884] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.254 [2024-07-15 21:46:14.324888] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.254 [2024-07-15 21:46:14.324993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.254 [2024-07-15 21:46:14.325104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.254 [2024-07-15 21:46:14.325117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.254 [2024-07-15 21:46:14.325119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.823 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.823 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:20.823 21:46:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.823 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.824 21:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:20.824 "tick_rate": 2300000000, 00:08:20.824 "poll_groups": [ 00:08:20.824 { 00:08:20.824 "name": "nvmf_tgt_poll_group_000", 00:08:20.824 "admin_qpairs": 0, 00:08:20.824 "io_qpairs": 0, 00:08:20.824 "current_admin_qpairs": 0, 00:08:20.824 "current_io_qpairs": 0, 00:08:20.824 "pending_bdev_io": 0, 00:08:20.824 "completed_nvme_io": 0, 00:08:20.824 "transports": [] 00:08:20.824 }, 00:08:20.824 { 00:08:20.824 "name": "nvmf_tgt_poll_group_001", 00:08:20.824 "admin_qpairs": 0, 00:08:20.824 "io_qpairs": 0, 00:08:20.824 "current_admin_qpairs": 0, 00:08:20.824 "current_io_qpairs": 0, 00:08:20.824 "pending_bdev_io": 0, 00:08:20.824 "completed_nvme_io": 0, 00:08:20.824 "transports": [] 00:08:20.824 }, 00:08:20.824 { 00:08:20.824 "name": "nvmf_tgt_poll_group_002", 00:08:20.824 "admin_qpairs": 0, 00:08:20.824 "io_qpairs": 0, 00:08:20.824 "current_admin_qpairs": 0, 00:08:20.824 "current_io_qpairs": 0, 00:08:20.824 "pending_bdev_io": 0, 00:08:20.824 "completed_nvme_io": 0, 00:08:20.824 "transports": [] 00:08:20.824 }, 00:08:20.824 { 00:08:20.824 "name": "nvmf_tgt_poll_group_003", 00:08:20.824 "admin_qpairs": 0, 00:08:20.824 "io_qpairs": 0, 00:08:20.824 "current_admin_qpairs": 0, 00:08:20.824 "current_io_qpairs": 0, 00:08:20.824 "pending_bdev_io": 0, 00:08:20.824 "completed_nvme_io": 0, 00:08:20.824 "transports": [] 00:08:20.824 } 00:08:20.824 ] 00:08:20.824 }' 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:20.824 21:46:15 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.083 [2024-07-15 21:46:15.142674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:21.083 "tick_rate": 2300000000, 00:08:21.083 "poll_groups": [ 00:08:21.083 { 00:08:21.083 "name": "nvmf_tgt_poll_group_000", 00:08:21.083 "admin_qpairs": 0, 00:08:21.083 "io_qpairs": 0, 00:08:21.083 "current_admin_qpairs": 0, 00:08:21.083 "current_io_qpairs": 0, 00:08:21.083 "pending_bdev_io": 0, 00:08:21.083 "completed_nvme_io": 0, 00:08:21.083 "transports": [ 00:08:21.083 { 00:08:21.083 "trtype": "TCP" 00:08:21.083 } 00:08:21.083 ] 00:08:21.083 }, 00:08:21.083 { 00:08:21.083 "name": "nvmf_tgt_poll_group_001", 00:08:21.083 "admin_qpairs": 0, 00:08:21.083 "io_qpairs": 0, 00:08:21.083 "current_admin_qpairs": 0, 00:08:21.083 "current_io_qpairs": 0, 00:08:21.083 "pending_bdev_io": 0, 00:08:21.083 "completed_nvme_io": 0, 00:08:21.083 "transports": [ 00:08:21.083 { 00:08:21.083 "trtype": "TCP" 00:08:21.083 } 00:08:21.083 ] 00:08:21.083 }, 00:08:21.083 { 00:08:21.083 "name": "nvmf_tgt_poll_group_002", 00:08:21.083 "admin_qpairs": 0, 00:08:21.083 "io_qpairs": 0, 00:08:21.083 "current_admin_qpairs": 0, 00:08:21.083 "current_io_qpairs": 0, 00:08:21.083 "pending_bdev_io": 0, 00:08:21.083 "completed_nvme_io": 0, 00:08:21.083 "transports": [ 00:08:21.083 { 00:08:21.083 "trtype": "TCP" 00:08:21.083 } 00:08:21.083 ] 00:08:21.083 }, 00:08:21.083 { 00:08:21.083 "name": "nvmf_tgt_poll_group_003", 00:08:21.083 "admin_qpairs": 0, 00:08:21.083 "io_qpairs": 0, 00:08:21.083 "current_admin_qpairs": 0, 00:08:21.083 "current_io_qpairs": 0, 00:08:21.083 "pending_bdev_io": 0, 00:08:21.083 "completed_nvme_io": 0, 00:08:21.083 "transports": [ 00:08:21.083 { 00:08:21.083 "trtype": "TCP" 00:08:21.083 } 00:08:21.083 ] 00:08:21.083 } 00:08:21.083 ] 00:08:21.083 }' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
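From here rpc.sh alternates RPC-side configuration with initiator-side connects. Two patterns dominate the rest of the trace: jsum, which totals one nvmf_get_stats field across poll groups with jq and awk, and a create/connect/disconnect/delete loop over nqn.2016-06.io.spdk:cnode1 (the trace also includes a host-ACL negative test first: nvme connect is expected to fail with "does not allow host" until the host NQN is registered or allow_any_host is set). A condensed sketch, assuming rpc_cmd is equivalent to scripts/rpc.py against /var/tmp/spdk.sock and with the waitforserial retry loop simplified:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    # jsum pattern: no I/O qpairs should exist yet
    total=$($rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
    (( total == 0 ))
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    for i in $(seq 1 5); do                     # loops=5 in rpc.sh
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # simplified stand-in for waitforserial: block until the namespace shows up
        until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done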
00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.083 Malloc1 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.083 [2024-07-15 21:46:15.318912] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.083 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:21.342 [2024-07-15 21:46:15.343526] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:21.342 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:21.342 could not add new controller: failed to write to nvme-fabrics device 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.342 21:46:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:22.718 21:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.718 21:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:22.718 21:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.718 21:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:22.718 21:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.621 21:46:18 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:24.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.621 [2024-07-15 21:46:18.710997] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:24.621 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:24.621 could not add new controller: failed to write to nvme-fabrics device 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.621 21:46:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:25.999 21:46:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:25.999 21:46:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:25.999 21:46:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:25.999 21:46:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:25.999 21:46:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:27.904 21:46:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:27.904 21:46:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:27.904 21:46:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.904 21:46:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:27.904 21:46:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.904 21:46:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:27.904 21:46:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:27.904 21:46:22 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.904 [2024-07-15 21:46:22.114103] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.904 21:46:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:29.283 21:46:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:29.283 21:46:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:29.283 21:46:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:29.283 21:46:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:29.283 21:46:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:31.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.186 [2024-07-15 21:46:25.385342] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.186 21:46:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.608 21:46:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.608 21:46:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:32.608 21:46:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.608 21:46:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:32.608 21:46:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 [2024-07-15 21:46:28.693989] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.514 21:46:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.893 21:46:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.893 21:46:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:35.893 21:46:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.893 21:46:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:35.893 21:46:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:37.800 21:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.800 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.800 [2024-07-15 21:46:32.037129] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.059 21:46:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:38.995 21:46:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:38.995 21:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:38.995 21:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:38.995 21:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:38.995 21:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:41.540 
21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:41.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.540 [2024-07-15 21:46:35.372853] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.540 21:46:35 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.540 21:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.477 21:46:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.477 21:46:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:42.477 21:46:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.477 21:46:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:42.477 21:46:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.392 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.652 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.652 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.652 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.652 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.652 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.652 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:44.652 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 [2024-07-15 21:46:38.673681] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 [2024-07-15 21:46:38.721791] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 [2024-07-15 21:46:38.773956] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
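The waitforserial and waitforserial_disconnect steps traced through the connect/disconnect cycles above are simple polling loops: sleep, list block devices, count how many carry the subsystem's serial, and retry until the count matches (or, for the disconnect variant, until the serial no longer shows up). A condensed sketch of the appearing-device case, assuming the SPDKISFASTANDAWESOME serial used throughout this run; the real helper lives in common/autotest_common.sh:

  waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    sleep 2                                      # give the kernel time to enumerate the namespace
    while ((i++ <= 15)); do
      # count block devices whose SERIAL column matches
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
      ((nvme_devices == nvme_device_counter)) && return 0
      sleep 2
    done
    return 1                                     # device never showed up
  }

The @1220/@1227 pairs in the disconnect traces above are the inverse check: lsblk -o NAME,SERIAL piped through grep -q -w, returning once the serial has disappeared.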
00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 [2024-07-15 21:46:38.822113] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.653 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
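Stripped of the xtrace noise, each pass of the rpc.sh@99-@107 loop above is six RPC calls against the running target, with no host I/O in between. A minimal sketch of the loop, assuming scripts/rpc.py from this workspace and an existing Malloc1 bdev:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME     # fixed serial number
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1                     # NSID auto-assigned
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"
  done

The earlier rpc.sh@81-@94 loop is the same shape, but pins the namespace to NSID 5 (add_ns ... -n 5, remove_ns ... 5) and wedges an nvme connect / nvme disconnect between the add and the remove, so every pass also exercises a live host path.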
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:44.654 [2024-07-15 21:46:38.870308] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:44.654 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:44.914 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:08:44.914 "tick_rate": 2300000000,
00:08:44.914 "poll_groups": [
00:08:44.914 {
00:08:44.914 "name": "nvmf_tgt_poll_group_000",
00:08:44.914 "admin_qpairs": 2,
00:08:44.914 "io_qpairs": 168,
00:08:44.914 "current_admin_qpairs": 0,
00:08:44.914 "current_io_qpairs": 0,
00:08:44.914 "pending_bdev_io": 0,
00:08:44.914 "completed_nvme_io": 268,
00:08:44.914 "transports": [
00:08:44.914 {
00:08:44.914 "trtype": "TCP"
00:08:44.914 }
00:08:44.914 ]
00:08:44.914 },
00:08:44.914 {
00:08:44.914 "name": "nvmf_tgt_poll_group_001",
00:08:44.914 "admin_qpairs": 2,
00:08:44.914 "io_qpairs": 168,
00:08:44.914 "current_admin_qpairs": 0,
00:08:44.914 "current_io_qpairs": 0,
00:08:44.914 "pending_bdev_io": 0,
00:08:44.914 "completed_nvme_io": 320,
00:08:44.914 "transports": [
00:08:44.914 {
00:08:44.914 "trtype": "TCP"
00:08:44.915 }
00:08:44.915 ]
00:08:44.915 },
00:08:44.915 {
00:08:44.915 "name": "nvmf_tgt_poll_group_002",
00:08:44.915 "admin_qpairs": 1,
00:08:44.915 "io_qpairs": 168,
00:08:44.915 "current_admin_qpairs": 0,
00:08:44.915 "current_io_qpairs": 0,
00:08:44.915 "pending_bdev_io": 0,
00:08:44.915 "completed_nvme_io": 168,
00:08:44.915 "transports": [
00:08:44.915 {
00:08:44.915 "trtype": "TCP"
00:08:44.915 }
00:08:44.915 ]
00:08:44.915 },
00:08:44.915 {
00:08:44.915 "name": "nvmf_tgt_poll_group_003",
00:08:44.915 "admin_qpairs": 2,
00:08:44.915 "io_qpairs": 168,
00:08:44.915 "current_admin_qpairs": 0,
00:08:44.915 "current_io_qpairs": 0,
00:08:44.915 "pending_bdev_io": 0,
00:08:44.915 "completed_nvme_io": 266,
00:08:44.915 "transports": [
00:08:44.915 {
00:08:44.915 "trtype": "TCP"
00:08:44.915 }
00:08:44.915 ]
00:08:44.915 }
00:08:44.915 ]
00:08:44.915 }'
00:08:44.915 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:08:44.915 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:08:44.915 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:08:44.915 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:08:44.915 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:08:44.915 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:08:44.915 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:08:44.915 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:08:44.915 21:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 ))
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:44.915 rmmod nvme_tcp
00:08:44.915 rmmod nvme_fabrics
00:08:44.915 rmmod nvme_keyring
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3567881 ']'
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3567881
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3567881 ']'
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3567881
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3567881
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3567881'
00:08:44.915 killing process with pid 3567881
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3567881
00:08:44.915 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3567881
00:08:45.175 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:45.175 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:45.175 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:45.175 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:45.175 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:45.175 21:46:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:45.175 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:45.175 21:46:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:47.714 21:46:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:47.714
00:08:47.714 real 0m32.950s
00:08:47.714 user 1m41.394s
00:08:47.714 sys 0m5.941s
00:08:47.714 21:46:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:47.714 21:46:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:47.714 ************************************
00:08:47.714 END TEST nvmf_rpc
00:08:47.714 ************************************
00:08:47.714 21:46:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:08:47.714 21:46:41 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:08:47.714 21:46:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:08:47.714 21:46:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:47.714 21:46:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:47.714 ************************************
00:08:47.714 START TEST nvmf_invalid
00:08:47.714 ************************************
00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:08:47.714 * Looking for test storage...
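Before the nvmf_invalid output continues, one detail from the close of nvmf_rpc above: the (( 7 > 0 )) and (( 672 > 0 )) assertions at rpc.sh@112/@113 are fed by the jsum helper, which sums one numeric field of the captured nvmf_get_stats JSON across all poll groups; the jq-plus-awk pipeline at rpc.sh@20 is visible in the trace. A self-contained equivalent, assuming scripts/rpc.py on PATH (the script itself filters the $stats string it already holds rather than re-querying):

  # sum one field across poll groups, e.g. jsum '.poll_groups[].io_qpairs'
  jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  stats=$(rpc.py nvmf_get_stats)
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 x 168 = 672 in this run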
00:08:47.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.714 21:46:41 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.715 21:46:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.989 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:52.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:52.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:52.990 Found net devices under 0000:86:00.0: cvl_0_0 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:52.990 Found net devices under 0000:86:00.1: cvl_0_1 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:52.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:52.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms
00:08:52.990
00:08:52.990 --- 10.0.0.2 ping statistics ---
00:08:52.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:52.990 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:52.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:52.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
00:08:52.990
00:08:52.990 --- 10.0.0.1 ping statistics ---
00:08:52.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:52.990 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:52.990 21:46:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:52.990 21:46:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:08:52.990 21:46:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:52.990 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:52.990 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:08:52.991 21:46:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3575560
00:08:52.991 21:46:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3575560
00:08:52.991 21:46:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:52.991 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3575560 ']'
00:08:52.991 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:52.991 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:52.991 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:52.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:52.991 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:52.991 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
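The pings above are the payoff of the nvmf_tcp_init plumbing traced at nvmf/common.sh@229-@268: the two e810 ports are split across a network namespace, so 10.0.0.1 -> 10.0.0.2 traffic actually crosses the link between cvl_0_1 (root namespace, initiator side) and cvl_0_0 (inside cvl_0_0_ns_spdk, target side) rather than short-circuiting through loopback. A reduced sketch of that setup, using the interface names from this run:

  ns=cvl_0_0_ns_spdk
  ip netns add $ns
  ip link set cvl_0_0 netns $ns                  # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator port stays in the root namespace
  ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $ns ip link set cvl_0_0 up
  ip netns exec $ns ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # root namespace -> target namespace
  ip netns exec $ns ping -c 1 10.0.0.1           # target namespace -> root namespace

This is also why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk (nvmf/common.sh@480 above): the target must own the namespaced port for the later nvme connect to 10.0.0.2:4420 to traverse the physical wire.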
00:08:52.991 [2024-07-15 21:46:47.064797] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:08:52.991 [2024-07-15 21:46:47.064842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:52.991 EAL: No free 2048 kB hugepages reported on node 1
00:08:52.991 [2024-07-15 21:46:47.124446] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:52.991 [2024-07-15 21:46:47.205023] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:52.991 [2024-07-15 21:46:47.205059] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:52.991 [2024-07-15 21:46:47.205066] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:52.991 [2024-07-15 21:46:47.205072] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:52.991 [2024-07-15 21:46:47.205078] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:52.991 [2024-07-15 21:46:47.205112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:52.991 [2024-07-15 21:46:47.205133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:52.991 [2024-07-15 21:46:47.205221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:52.991 [2024-07-15 21:46:47.205222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.930 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:53.930 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0
00:08:53.930 21:46:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:08:53.930 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:53.930 21:46:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:08:53.930 21:46:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:53.930 21:46:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:08:53.930 21:46:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14405
00:08:53.930 [2024-07-15 21:46:48.056607] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:08:53.930 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:08:53.930 {
00:08:53.930 "nqn": "nqn.2016-06.io.spdk:cnode14405",
00:08:53.930 "tgt_name": "foobar",
00:08:53.930 "method": "nvmf_create_subsystem",
00:08:53.930 "req_id": 1
00:08:53.930 }
00:08:53.930 Got JSON-RPC error response
00:08:53.930 response:
00:08:53.930 {
00:08:53.930 "code": -32603,
00:08:53.930 "message": "Unable to find target foobar"
00:08:53.930 }'
00:08:53.930 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:08:53.930 {
00:08:53.930 "nqn": "nqn.2016-06.io.spdk:cnode14405",
00:08:53.930 "tgt_name": "foobar",
00:08:53.930 "method": "nvmf_create_subsystem",
00:08:53.930 "req_id": 1
00:08:53.930 }
00:08:53.930 Got JSON-RPC error response
00:08:53.930 response:
00:08:53.930 {
00:08:53.930 "code": -32603,
00:08:53.930 "message": "Unable to find target foobar"
00:08:53.930 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:08:53.930 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:08:53.930 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13334
00:08:54.190 [2024-07-15 21:46:48.245262] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13334: invalid serial number 'SPDKISFASTANDAWESOME'
00:08:54.190 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:08:54.190 {
00:08:54.190 "nqn": "nqn.2016-06.io.spdk:cnode13334",
00:08:54.190 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:08:54.190 "method": "nvmf_create_subsystem",
00:08:54.190 "req_id": 1
00:08:54.190 }
00:08:54.190 Got JSON-RPC error response
00:08:54.190 response:
00:08:54.190 {
00:08:54.190 "code": -32602,
00:08:54.190 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:08:54.190 }'
00:08:54.190 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:08:54.190 {
00:08:54.190 "nqn": "nqn.2016-06.io.spdk:cnode13334",
00:08:54.190 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:08:54.190 "method": "nvmf_create_subsystem",
00:08:54.190 "req_id": 1
00:08:54.190 }
00:08:54.191 Got JSON-RPC error response
00:08:54.191 response:
00:08:54.191 {
00:08:54.191 "code": -32602,
00:08:54.191 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:08:54.191 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:08:54.191 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:08:54.191 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4414
00:08:54.452 [2024-07-15 21:46:48.441888] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4414: invalid model number 'SPDK_Controller'
00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:08:54.452 {
00:08:54.452 "nqn": "nqn.2016-06.io.spdk:cnode4414",
00:08:54.452 "model_number": "SPDK_Controller\u001f",
00:08:54.452 "method": "nvmf_create_subsystem",
00:08:54.452 "req_id": 1
00:08:54.452 }
00:08:54.452 Got JSON-RPC error response
00:08:54.452 response:
00:08:54.452 {
00:08:54.452 "code": -32602,
00:08:54.452 "message": "Invalid MN SPDK_Controller\u001f"
00:08:54.452 }'
00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:08:54.452 {
00:08:54.452 "nqn": "nqn.2016-06.io.spdk:cnode4414",
00:08:54.452 "model_number": "SPDK_Controller\u001f",
00:08:54.452 "method": "nvmf_create_subsystem",
00:08:54.452 "req_id": 1
00:08:54.452 }
00:08:54.452 Got JSON-RPC error response
00:08:54.452 response:
00:08:54.452 {
00:08:54.452 "code": -32602,
00:08:54.452 "message": "Invalid MN SPDK_Controller\u001f"
00:08:54.452 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83'
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.452 
21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:54.452 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 
21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'rP~|xc0t)hW{4Rs).+Qr7' 00:08:54.453 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'rP~|xc0t)hW{4Rs).+Qr7' nqn.2016-06.io.spdk:cnode8402 00:08:54.715 [2024-07-15 21:46:48.771026] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8402: invalid serial number 'rP~|xc0t)hW{4Rs).+Qr7' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:54.715 { 00:08:54.715 "nqn": "nqn.2016-06.io.spdk:cnode8402", 00:08:54.715 "serial_number": "rP~|xc0t)hW{4Rs).+Qr7", 00:08:54.715 "method": "nvmf_create_subsystem", 00:08:54.715 "req_id": 1 00:08:54.715 } 00:08:54.715 Got JSON-RPC error response 00:08:54.715 response: 00:08:54.715 { 
00:08:54.715 "code": -32602, 00:08:54.715 "message": "Invalid SN rP~|xc0t)hW{4Rs).+Qr7" 00:08:54.715 }' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:54.715 { 00:08:54.715 "nqn": "nqn.2016-06.io.spdk:cnode8402", 00:08:54.715 "serial_number": "rP~|xc0t)hW{4Rs).+Qr7", 00:08:54.715 "method": "nvmf_create_subsystem", 00:08:54.715 "req_id": 1 00:08:54.715 } 00:08:54.715 Got JSON-RPC error response 00:08:54.715 response: 00:08:54.715 { 00:08:54.715 "code": -32602, 00:08:54.715 "message": "Invalid SN rP~|xc0t)hW{4Rs).+Qr7" 00:08:54.715 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 
00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:54.715 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 
00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 
00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.716 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.975 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:08:54.975 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
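
The long printf/echo run around this point is invalid.sh's gen_random_s unrolled one character per iteration: pick a code point from the 32..127 table, convert it with printf %x plus echo -e, and append the character to $string. A condensed, hedged sketch of the same generator, assuming nothing beyond what the trace shows (the real helper additionally rejects '-' as a leading character, which is what the [[ r == \- ]] style checks in this trace correspond to):

    # Condensed rendering of gen_random_s as traced above.
    gen_random_s() {
        local length=$1 ll hex string=
        local chars=($(seq 32 127))                # same code-point table as above
        for ((ll = 0; ll < length; ll++)); do
            printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"
            string+=$(echo -e "\x$hex")            # code point -> literal character
        done
        echo "$string"
    }

    gen_random_s 41    # 41 bytes: one past the 40-byte model-number field
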
00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:08:54.976 21:46:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n vDqW6Kh&{3/+!smZ#B1~z'\''}rr:Tw5'\'' AZG\@I,' 00:08:54.976 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'n vDqW6Kh&{3/+!smZ#B1~z'\''}rr:Tw5'\'' AZG\@I,' nqn.2016-06.io.spdk:cnode24324 00:08:54.976 [2024-07-15 21:46:49.212536] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24324: invalid model number 'n vDqW6Kh&{3/+!smZ#B1~z'}rr:Tw5' AZG\@I,' 00:08:55.273 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:55.273 { 00:08:55.273 "nqn": "nqn.2016-06.io.spdk:cnode24324", 00:08:55.273 "model_number": "n vDqW6Kh&{3/+!smZ#B1~z'\''}rr:Tw5'\'' A\u007fZG\\@I,", 00:08:55.273 "method": "nvmf_create_subsystem", 00:08:55.273 "req_id": 1 00:08:55.273 } 00:08:55.273 Got JSON-RPC error response 00:08:55.273 response: 00:08:55.273 { 00:08:55.273 "code": -32602, 00:08:55.273 "message": "Invalid MN n vDqW6Kh&{3/+!smZ#B1~z'\''}rr:Tw5'\'' A\u007fZG\\@I," 00:08:55.273 }' 00:08:55.273 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:55.273 { 00:08:55.273 "nqn": "nqn.2016-06.io.spdk:cnode24324", 00:08:55.273 "model_number": "n vDqW6Kh&{3/+!smZ#B1~z'}rr:Tw5' A\u007fZG\\@I,", 00:08:55.273 "method": "nvmf_create_subsystem", 00:08:55.273 "req_id": 1 00:08:55.273 } 00:08:55.273 Got JSON-RPC error response 00:08:55.273 response: 00:08:55.273 { 00:08:55.273 "code": -32602, 00:08:55.273 "message": "Invalid MN n vDqW6Kh&{3/+!smZ#B1~z'}rr:Tw5' A\u007fZG\\@I," 00:08:55.273 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:55.273 21:46:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:55.273 [2024-07-15 21:46:49.393204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.273 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:55.532 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:55.532 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:55.532 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:55.532 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:55.532 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:55.791 [2024-07-15 21:46:49.778504] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:55.791 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:55.791 { 00:08:55.791 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:55.791 "listen_address": { 00:08:55.791 "trtype": "tcp", 00:08:55.791 "traddr": "", 00:08:55.791 "trsvcid": "4421" 00:08:55.791 }, 00:08:55.791 "method": "nvmf_subsystem_remove_listener", 00:08:55.791 "req_id": 1 00:08:55.791 } 00:08:55.791 Got JSON-RPC error response 00:08:55.791 response: 00:08:55.791 { 00:08:55.791 "code": -32602, 00:08:55.791 "message": "Invalid parameters" 00:08:55.791 }' 00:08:55.791 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:55.791 { 00:08:55.791 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:55.791 "listen_address": { 00:08:55.791 "trtype": "tcp", 00:08:55.791 "traddr": "", 00:08:55.791 "trsvcid": "4421" 00:08:55.791 }, 00:08:55.791 "method": "nvmf_subsystem_remove_listener", 00:08:55.791 "req_id": 1 00:08:55.791 } 00:08:55.791 Got JSON-RPC error response 00:08:55.791 response: 00:08:55.791 { 00:08:55.791 "code": -32602, 00:08:55.791 "message": "Invalid parameters" 00:08:55.791 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:55.791 21:46:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3349 -i 0 00:08:55.791 [2024-07-15 21:46:49.975127] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3349: invalid cntlid range [0-65519] 00:08:55.791 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:55.791 { 00:08:55.791 "nqn": "nqn.2016-06.io.spdk:cnode3349", 00:08:55.791 "min_cntlid": 0, 00:08:55.791 "method": "nvmf_create_subsystem", 00:08:55.791 "req_id": 1 00:08:55.791 } 00:08:55.791 Got JSON-RPC error response 00:08:55.791 response: 00:08:55.791 { 00:08:55.791 "code": -32602, 00:08:55.791 "message": "Invalid cntlid range [0-65519]" 00:08:55.791 }' 00:08:55.791 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:55.791 { 00:08:55.791 "nqn": "nqn.2016-06.io.spdk:cnode3349", 00:08:55.791 "min_cntlid": 0, 00:08:55.791 "method": "nvmf_create_subsystem", 00:08:55.791 "req_id": 1 00:08:55.791 } 00:08:55.791 Got JSON-RPC error response 00:08:55.791 response: 00:08:55.791 { 00:08:55.791 "code": -32602, 00:08:55.791 "message": "Invalid cntlid range [0-65519]" 00:08:55.791 } == *\I\n\v\a\l\i\d\ 
\c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:55.791 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29365 -i 65520 00:08:56.049 [2024-07-15 21:46:50.167781] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29365: invalid cntlid range [65520-65519] 00:08:56.049 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:56.049 { 00:08:56.049 "nqn": "nqn.2016-06.io.spdk:cnode29365", 00:08:56.049 "min_cntlid": 65520, 00:08:56.049 "method": "nvmf_create_subsystem", 00:08:56.049 "req_id": 1 00:08:56.049 } 00:08:56.049 Got JSON-RPC error response 00:08:56.049 response: 00:08:56.049 { 00:08:56.049 "code": -32602, 00:08:56.049 "message": "Invalid cntlid range [65520-65519]" 00:08:56.049 }' 00:08:56.049 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:56.049 { 00:08:56.049 "nqn": "nqn.2016-06.io.spdk:cnode29365", 00:08:56.049 "min_cntlid": 65520, 00:08:56.049 "method": "nvmf_create_subsystem", 00:08:56.049 "req_id": 1 00:08:56.049 } 00:08:56.049 Got JSON-RPC error response 00:08:56.049 response: 00:08:56.049 { 00:08:56.049 "code": -32602, 00:08:56.049 "message": "Invalid cntlid range [65520-65519]" 00:08:56.049 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:56.049 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29197 -I 0 00:08:56.307 [2024-07-15 21:46:50.356413] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29197: invalid cntlid range [1-0] 00:08:56.307 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:56.307 { 00:08:56.307 "nqn": "nqn.2016-06.io.spdk:cnode29197", 00:08:56.307 "max_cntlid": 0, 00:08:56.307 "method": "nvmf_create_subsystem", 00:08:56.307 "req_id": 1 00:08:56.307 } 00:08:56.307 Got JSON-RPC error response 00:08:56.307 response: 00:08:56.307 { 00:08:56.307 "code": -32602, 00:08:56.307 "message": "Invalid cntlid range [1-0]" 00:08:56.307 }' 00:08:56.307 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:56.307 { 00:08:56.307 "nqn": "nqn.2016-06.io.spdk:cnode29197", 00:08:56.307 "max_cntlid": 0, 00:08:56.307 "method": "nvmf_create_subsystem", 00:08:56.307 "req_id": 1 00:08:56.307 } 00:08:56.307 Got JSON-RPC error response 00:08:56.307 response: 00:08:56.307 { 00:08:56.307 "code": -32602, 00:08:56.307 "message": "Invalid cntlid range [1-0]" 00:08:56.307 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:56.307 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6779 -I 65520 00:08:56.307 [2024-07-15 21:46:50.540997] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6779: invalid cntlid range [1-65520] 00:08:56.566 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:56.566 { 00:08:56.566 "nqn": "nqn.2016-06.io.spdk:cnode6779", 00:08:56.566 "max_cntlid": 65520, 00:08:56.566 "method": "nvmf_create_subsystem", 00:08:56.566 "req_id": 1 00:08:56.566 } 00:08:56.566 Got JSON-RPC error response 00:08:56.566 response: 00:08:56.566 { 00:08:56.566 "code": -32602, 00:08:56.566 "message": "Invalid cntlid range [1-65520]" 00:08:56.566 }' 00:08:56.566 21:46:50 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:08:56.566 { 00:08:56.566 "nqn": "nqn.2016-06.io.spdk:cnode6779", 00:08:56.566 "max_cntlid": 65520, 00:08:56.566 "method": "nvmf_create_subsystem", 00:08:56.566 "req_id": 1 00:08:56.566 } 00:08:56.566 Got JSON-RPC error response 00:08:56.566 response: 00:08:56.566 { 00:08:56.566 "code": -32602, 00:08:56.566 "message": "Invalid cntlid range [1-65520]" 00:08:56.566 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:56.566 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21433 -i 6 -I 5 00:08:56.566 [2024-07-15 21:46:50.721613] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21433: invalid cntlid range [6-5] 00:08:56.566 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:56.566 { 00:08:56.566 "nqn": "nqn.2016-06.io.spdk:cnode21433", 00:08:56.566 "min_cntlid": 6, 00:08:56.566 "max_cntlid": 5, 00:08:56.566 "method": "nvmf_create_subsystem", 00:08:56.566 "req_id": 1 00:08:56.566 } 00:08:56.566 Got JSON-RPC error response 00:08:56.566 response: 00:08:56.566 { 00:08:56.566 "code": -32602, 00:08:56.566 "message": "Invalid cntlid range [6-5]" 00:08:56.566 }' 00:08:56.566 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:56.566 { 00:08:56.566 "nqn": "nqn.2016-06.io.spdk:cnode21433", 00:08:56.566 "min_cntlid": 6, 00:08:56.566 "max_cntlid": 5, 00:08:56.566 "method": "nvmf_create_subsystem", 00:08:56.566 "req_id": 1 00:08:56.566 } 00:08:56.566 Got JSON-RPC error response 00:08:56.566 response: 00:08:56.566 { 00:08:56.566 "code": -32602, 00:08:56.566 "message": "Invalid cntlid range [6-5]" 00:08:56.566 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:56.566 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:56.825 { 00:08:56.825 "name": "foobar", 00:08:56.825 "method": "nvmf_delete_target", 00:08:56.825 "req_id": 1 00:08:56.825 } 00:08:56.825 Got JSON-RPC error response 00:08:56.825 response: 00:08:56.825 { 00:08:56.825 "code": -32602, 00:08:56.825 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:56.825 }' 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:56.825 { 00:08:56.825 "name": "foobar", 00:08:56.825 "method": "nvmf_delete_target", 00:08:56.825 "req_id": 1 00:08:56.825 } 00:08:56.825 Got JSON-RPC error response 00:08:56.825 response: 00:08:56.825 { 00:08:56.825 "code": -32602, 00:08:56.825 "message": "The specified target doesn't exist, cannot delete it." 
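
After nvmf_create_transport brings up TCP (the '*** TCP Transport Init ***' notice above), the suite sweeps the controller-ID boundaries: min_cntlid 0 and 65520, max_cntlid 0 and 65520, and the inverted range 6..5, each rejected with -32602 'Invalid cntlid range [a-b]' since valid IDs span 1..65519; it finally asks multitarget_rpc.py to delete a target that was never created. A hedged sketch of the same sweep, reusing the -i/-I flags and script paths from the trace (the NQN here is arbitrary, and grep again stands in for the script's glob matches):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    mt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    for pair in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
        # $pair stays unquoted on purpose so it splits into flag(s) + value(s)
        out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 $pair 2>&1 || true)
        echo "$out" | grep -q 'Invalid cntlid range'
    done

    "$mt" nvmf_delete_target --name foobar 2>&1 | grep -q 'cannot delete it'
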
00:08:56.825 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.825 rmmod nvme_tcp 00:08:56.825 rmmod nvme_fabrics 00:08:56.825 rmmod nvme_keyring 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3575560 ']' 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3575560 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3575560 ']' 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3575560 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3575560 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3575560' 00:08:56.825 killing process with pid 3575560 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3575560 00:08:56.825 21:46:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3575560 00:08:57.084 21:46:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.084 21:46:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.084 21:46:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.084 21:46:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.084 21:46:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.084 21:46:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.084 21:46:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.084 21:46:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.990 21:46:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.990 00:08:58.990 real 0m11.726s 00:08:58.990 user 0m19.480s 00:08:58.990 sys 0m5.030s 00:08:58.990 21:46:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.990 21:46:53 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:58.990 ************************************ 00:08:58.990 END TEST nvmf_invalid 00:08:58.990 ************************************ 00:08:59.248 21:46:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:59.248 21:46:53 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:59.248 21:46:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:59.248 21:46:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.248 21:46:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:59.248 ************************************ 00:08:59.248 START TEST nvmf_abort 00:08:59.248 ************************************ 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:59.248 * Looking for test storage... 00:08:59.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.248 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:59.249 21:46:53 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.249 21:46:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.522 
21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:04.522 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:04.522 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:04.522 Found net devices under 0000:86:00.0: cvl_0_0 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.522 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:04.522 Found net devices under 0000:86:00.1: cvl_0_1 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:04.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:04.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms
00:09:04.523
00:09:04.523 --- 10.0.0.2 ping statistics ---
00:09:04.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:04.523 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:04.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:04.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms
00:09:04.523
00:09:04.523 --- 10.0.0.1 ping statistics ---
00:09:04.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:04.523 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3579877
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3579877
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3579877 ']'
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:04.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:04.523 21:46:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:04.782 [2024-07-15 21:46:58.788054] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
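At this point the test topology is in place: nvmf_tcp_init moved the first E810 port (cvl_0_0) into a private namespace as the target side at 10.0.0.2/24, left cvl_0_1 in the root namespace as the initiator at 10.0.0.1/24, opened TCP/4420, and verified reachability in both directions before starting the target app (whose EAL startup output continues below). Condensed from the ip/iptables commands traced above, as a sketch using this run's interface and namespace names:

# Condensed from the nvmf_tcp_init trace above; a sketch, not the full helper.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the netns
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1

Splitting the two ports across namespaces is what forces target and initiator traffic onto the physical wire between them instead of letting the kernel short-circuit it over loopback on a single host.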
00:09:04.782 [2024-07-15 21:46:58.788095] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.782 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.782 [2024-07-15 21:46:58.844546] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.782 [2024-07-15 21:46:58.924339] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.782 [2024-07-15 21:46:58.924373] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.782 [2024-07-15 21:46:58.924381] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.782 [2024-07-15 21:46:58.924387] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.782 [2024-07-15 21:46:58.924393] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.782 [2024-07-15 21:46:58.924487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.782 [2024-07-15 21:46:58.924571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.782 [2024-07-15 21:46:58.924572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 [2024-07-15 21:46:59.649840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 Malloc0 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 Delay0 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 [2024-07-15 21:46:59.724623] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.721 21:46:59 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:05.721 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.721 [2024-07-15 21:46:59.835969] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:08.256 Initializing NVMe Controllers 00:09:08.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:08.256 controller IO queue size 128 less than required 00:09:08.256 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:08.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:08.256 Initialization complete. Launching workers. 
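With the target listening on the RPC socket, abort.sh provisions it entirely over RPC: a TCP transport, a 64 MB malloc bdev (4096-byte blocks) wrapped in a delay bdev that adds roughly a second of latency to every I/O so aborts have something to catch, then subsystem cnode0 with Delay0 as its namespace and listeners on 10.0.0.2:4420, before launching the abort example against it. The same sequence via scripts/rpc.py, condensed from the rpc_cmd calls above (paths shortened; flags reproduced as recorded; the run's abort counters follow just below):

# Equivalent of the rpc_cmd sequence traced above, issued with scripts/rpc.py.
rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0            # MALLOC_BDEV_SIZE=64, block 4096
rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000         # latencies in microseconds (~1 s)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Then hammer it with the abort example: 1 core, 1 second, queue depth 128.
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

The "IO requests may be queued at the NVMe driver" notice above is exactly the pressure this test wants: a 128-deep queue against a 1-second-latency namespace guarantees in-flight commands for the abort path to chase.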
00:09:08.256 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 42600
00:09:08.256 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42664, failed to submit 62
00:09:08.256 success 42604, unsuccess 60, failed 0
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:08.256 rmmod nvme_tcp
00:09:08.256 rmmod nvme_fabrics
00:09:08.256 rmmod nvme_keyring
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3579877 ']'
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3579877
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3579877 ']'
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3579877
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:08.256 21:47:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3579877
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3579877'
00:09:08.256 killing process with pid 3579877
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3579877
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3579877
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:08.256 21:47:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:08.257 21:47:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:08.257 21:47:02 nvmf_tcp.nvmf_abort --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.257 21:47:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.177 21:47:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.177 00:09:10.177 real 0m11.007s 00:09:10.177 user 0m13.034s 00:09:10.177 sys 0m4.920s 00:09:10.177 21:47:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.177 21:47:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.177 ************************************ 00:09:10.177 END TEST nvmf_abort 00:09:10.177 ************************************ 00:09:10.177 21:47:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:10.177 21:47:04 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:10.177 21:47:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:10.177 21:47:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.177 21:47:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.177 ************************************ 00:09:10.177 START TEST nvmf_ns_hotplug_stress 00:09:10.177 ************************************ 00:09:10.177 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:10.436 * Looking for test storage... 00:09:10.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.436 21:47:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.436 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.437 21:47:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.437 21:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.711 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.711 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.711 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:15.712 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:15.712 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.712 21:47:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:15.712 Found net devices under 0000:86:00.0: cvl_0_0 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:15.712 Found net devices under 0000:86:00.1: cvl_0_1 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.712 21:47:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.712 21:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:15.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:09:15.712 00:09:15.712 --- 10.0.0.2 ping statistics --- 00:09:15.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.712 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:09:15.712
00:09:15.712 --- 10.0.0.1 ping statistics ---
00:09:15.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:15.712 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3583765
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3583765
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3583765 ']'
00:09:15.712 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:15.713 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:15.713 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:15.713 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:15.713 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:09:15.713 21:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:15.713 [2024-07-15 21:47:09.264103] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
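As in the abort test, nvmfappstart launches nvmf_tgt inside the target namespace with shm id 0, every tracepoint group enabled (-e 0xFFFF), and core mask 0xE (three reactors, matching the "Reactor started on core 1/2/3" notices below), while waitforlisten polls the RPC socket, retrying up to 100 times, until the application answers. A minimal sketch of that pair, with the helper's bookkeeping stripped out and paths shortened (rpc_get_methods is just a cheap RPC to probe with):

# Sketch of nvmfappstart -m 0xE + waitforlisten, as traced above.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # -i shm id, -e tracepoints, -m cores 1-3
nvmfpid=$!                                         # pid of the netns-exec wrapper; fine for liveness
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1                   # give up if the target died
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"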
00:09:15.713 [2024-07-15 21:47:09.264147] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.713 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.713 [2024-07-15 21:47:09.322090] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:15.713 [2024-07-15 21:47:09.401914] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.713 [2024-07-15 21:47:09.401949] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.713 [2024-07-15 21:47:09.401957] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.713 [2024-07-15 21:47:09.401963] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.713 [2024-07-15 21:47:09.401971] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.713 [2024-07-15 21:47:09.402007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.713 [2024-07-15 21:47:09.402090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.713 [2024-07-15 21:47:09.402091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.971 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.971 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:15.971 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.971 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.971 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.971 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.971 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:15.971 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:16.230 [2024-07-15 21:47:10.267656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.230 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:16.489 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.489 [2024-07-15 21:47:10.649132] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.489 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.749 21:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:17.008 Malloc0 00:09:17.008 21:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:17.008 Delay0 00:09:17.008 21:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.267 21:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:17.526 NULL1 00:09:17.526 21:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:17.785 21:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:17.785 21:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3584146 00:09:17.785 21:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:17.785 21:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.785 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.722 Read completed with error (sct=0, sc=11) 00:09:18.722 21:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.981 21:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:18.981 21:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:19.239 true 00:09:19.239 21:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:19.239 21:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.236 21:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.236 21:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:20.236 21:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:20.236 true 00:09:20.494 
21:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:20.494 21:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.494 21:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.754 21:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:20.754 21:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:21.013 true 00:09:21.013 21:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:21.013 21:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.397 21:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.397 21:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:22.397 21:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:22.397 true 00:09:22.397 21:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:22.397 21:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.334 21:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.593 21:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:23.593 21:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:23.593 true 00:09:23.593 21:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:23.593 21:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.852 21:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.111 21:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:24.112 21:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:24.112 true 00:09:24.112 21:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:24.112 21:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.491 21:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.749 21:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:25.749 21:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:25.749 true 00:09:25.749 21:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:25.749 21:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.685 21:47:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.685 21:47:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:26.944 21:47:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:26.944 true 00:09:26.944 21:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:26.944 21:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.203 21:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.462 21:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:27.462 21:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:27.462 true 00:09:27.462 21:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:27.462 21:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.721 21:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.981 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:27.981 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:27.981 true 00:09:27.981 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:27.981 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.240 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.498 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:28.498 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:28.498 true 00:09:28.757 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:28.757 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.757 21:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.016 21:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:29.016 21:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:29.274 true 00:09:29.274 21:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:29.274 21:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.274 21:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.532 21:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:29.532 21:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:29.791 true 00:09:29.791 21:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:29.791 21:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:09:31.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.168 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.168 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:31.168 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:31.168 true 00:09:31.168 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:31.168 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.427 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.686 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:31.686 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:31.686 true 00:09:31.945 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:31.945 21:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.945 21:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.205 21:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:32.205 21:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:32.464 true 00:09:32.464 21:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:32.464 21:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.464 21:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.722 21:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:32.722 21:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:32.980 true 00:09:32.980 21:47:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:32.980 21:47:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:09:34.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.358 21:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.358 21:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:34.358 21:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:34.358 true 00:09:34.358 21:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:34.358 21:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.295 21:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.553 21:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:35.553 21:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:35.553 true 00:09:35.553 21:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:35.553 21:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.811 21:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.070 21:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:36.070 21:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:36.070 true 00:09:36.070 21:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:36.070 21:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.451 21:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.451 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:09:37.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.451 21:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:37.451 21:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:37.709 true 00:09:37.709 21:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:37.709 21:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.675 21:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.675 21:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:38.675 21:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:38.934 true 00:09:38.934 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:38.934 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.192 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.192 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:39.192 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:39.452 true 00:09:39.452 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:39.452 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.711 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.711 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:39.711 21:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:39.969 true 00:09:39.969 21:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:39.969 21:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.905 21:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.905 21:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:40.906 21:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:41.162 true 00:09:41.162 21:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:41.162 21:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.424 21:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.424 21:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:41.424 21:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:41.686 true 00:09:41.686 21:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:41.686 21:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.943 21:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.202 21:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:42.202 21:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:42.202 true 00:09:42.202 21:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:42.202 21:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.134 21:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.392 21:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:43.392 21:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:43.392 true 00:09:43.392 21:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:43.392 21:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.650 21:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.907 21:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:43.907 21:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:43.907 true 00:09:44.165 21:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:44.165 21:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.097 21:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.356 21:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:45.356 21:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:45.613 true 00:09:45.613 21:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:45.613 21:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.549 21:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.549 21:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:46.549 21:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:46.806 true 00:09:46.806 21:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:46.806 21:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.064 21:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.064 21:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:47.064 21:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:47.376 true 00:09:47.376 21:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:47.376 21:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.634 21:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.634 21:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:47.634 21:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:47.892 true 00:09:47.892 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146 00:09:47.892 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.892 Initializing NVMe Controllers 00:09:47.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:47.892 Controller IO queue size 128, less than required. 00:09:47.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:47.892 Controller IO queue size 128, less than required. 00:09:47.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:47.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:47.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:47.892 Initialization complete. Launching workers. 
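The summary table below reports the per-namespace statistics of the I/O workload that just exited. As a quick consistency check, the Total row is the IOPS-weighted mean of the two averages: (1819.99 * 40923.89 + 14881.85 * 8601.57) / 16701.84 ≈ 12123.7 us, with 1819.99 + 14881.85 = 16701.84 IOPS. The far higher average on NSID 1 is plausible for the Delay0-backed namespace that the loop above keeps detaching and re-attaching.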
00:09:47.892 ========================================================
00:09:47.892 Latency(us)
00:09:47.892 Device Information : IOPS MiB/s Average min max
00:09:47.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1819.99 0.89 40923.89 1775.18 1063221.04
00:09:47.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14881.85 7.27 8601.57 2136.15 381219.81
00:09:47.892 ========================================================
00:09:47.892 Total : 16701.84 8.16 12123.71 1775.18 1063221.04
00:09:48.151 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:48.151 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:09:48.151 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:09:48.409 true
00:09:48.409 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3584146
00:09:48.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3584146) - No such process
00:09:48.409 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3584146
00:09:48.409 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:48.668 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:48.926 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:48.926 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:48.926 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:48.926 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:48.926 21:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:48.926 null0
00:09:48.927 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:48.927 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:48.927 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:49.185 null1
00:09:49.185 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:49.185 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:49.185 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:49.443 null2
00:09:49.443 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:49.443 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:09:49.443 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:49.443 null3 00:09:49.443 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:49.443 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:49.443 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:49.702 null4 00:09:49.702 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:49.702 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:49.702 21:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:49.960 null5 00:09:49.960 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:49.960 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:49.960 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:50.221 null6 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:50.221 null7 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
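At @58-@64 the test switches to its concurrent phase: it creates eight 100 MB null bdevs with a 4096-byte block size, then forks one add_remove worker per namespace, collecting the worker PIDs for the wait at @66 that appears a little further down. A sketch of that phase, reconstructed from the xtrace (rpc is the same hypothetical shorthand as in the earlier sketch):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096    # name, size in MB, block size
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &           # namespace IDs 1..8, one per bdev
        pids+=($!)
    done
    wait "${pids[@]}"                              # the "wait 3589745 3589746 ..." record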
00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
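Each forked worker is the add_remove function whose @14-@18 xtrace lines dominate the remainder of the log: ten rounds of attaching its null bdev at a fixed namespace ID and immediately detaching it, with all eight workers racing on cnode1. Reconstructed sketch (anything beyond what the trace shows is an assumption):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                      # e.g. "add_remove 2 null1" above
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

The interleaved add/remove records that follow are these eight loops running concurrently: each worker's own order (add, then remove, ten times) is fixed, but the global interleaving differs from pass to pass.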
00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3589745 3589746 3589748 3589750 3589752 3589754 3589756 3589757 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.221 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:50.481 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:50.481 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:50.481 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:50.481 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:50.481 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.481 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:50.481 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:50.481 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:50.740 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:50.999 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:50.999 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:50.999 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:50.999 21:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.999 21:47:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.999 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:51.258 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:51.258 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.258 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:51.258 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:51.258 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:51.258 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.258 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:51.258 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.518 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.777 21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.777 
21:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:52.036 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:52.036 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:52.036 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.036 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:52.036 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:52.036 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:52.036 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:52.036 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:52.295 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.554 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.555 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:52.555 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.555 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.555 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:52.814 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:52.814 
21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:52.814 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:52.814 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.814 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:52.814 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:52.814 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:52.814 21:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:52.814 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.814 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.814 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:52.814 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.814 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.814 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.073 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.333 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:53.592 
21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.592 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.851 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.851 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:53.851 21:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.851 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.851 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:53.851 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.851 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:53.851 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
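The interleaving above decodes cleanly if each namespace has its own worker: every add (ns_hotplug_stress.sh line 17) is preceded by its own (( ++i )) / (( i < 10 )) check from line 16, the eight adds and eight removes (line 18) only bunch together because the workers run in near-lockstep over the RPC socket, and the run winds down with exactly eight childless counter checks as each worker's loop expires. One plausible reconstruction of that pattern, inferred from the xtrace rather than taken from the script's actual source (the worker structure and {1..8} fan-out are assumptions; the i < 10 bound and the rpc.py calls are straight from the trace):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # one background worker per namespace, each attaching and detaching its
  # null bdev ten times; OS scheduling of the eight workers produces the
  # shuffled add/remove bursts seen in the trace
  add_remove() {
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; ++i)); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }

  for n in {1..8}; do
      add_remove "$n" "null$((n - 1))" &   # nsid 1 -> null0 ... nsid 8 -> null7
  done
  wait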
00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.110 rmmod nvme_tcp 00:09:54.110 rmmod nvme_fabrics 00:09:54.110 rmmod nvme_keyring 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3583765 ']' 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3583765 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3583765 ']' 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3583765 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3583765 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3583765' 00:09:54.110 killing process with pid 3583765 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3583765 00:09:54.110 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3583765 00:09:54.369 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.369 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.369 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.369 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.369 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.369 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.369 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.369 21:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.975 21:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:56.975 00:09:56.975 real 0m46.251s 00:09:56.975 user 3m12.564s 00:09:56.975 sys 0m14.551s 00:09:56.975 21:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.975 21:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.975 ************************************ 00:09:56.975 END TEST nvmf_ns_hotplug_stress 00:09:56.975 ************************************ 00:09:56.975 21:47:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:56.975 21:47:50 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:56.975 21:47:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:56.975 21:47:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.975 21:47:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.975 ************************************ 00:09:56.975 START TEST nvmf_connect_stress 00:09:56.975 ************************************ 00:09:56.975 21:47:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:56.975 * Looking for test storage... 
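Every test in this log is wrapped the same way: run_test prints a START TEST banner, runs the script under time (the real 0m46.251s / user 3m12.564s / sys 0m14.551s summary above is that timer), and closes with the END TEST banner before the next run_test fires. As an illustration of the wrapper's visible behavior only, not the actual autotest_common.sh implementation:

  run_test() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      # the test script's xtrace output is what fills this log
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

The storage probe underway above is the first thing the newly launched connect_stress.sh does; the log now resumes with its setup.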
00:09:56.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.975 21:47:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.975 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:56.975 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.975 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.975 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.975 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.975 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:56.976 21:47:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:02.244 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:02.245 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:02.245 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:02.245 Found net devices under 0000:86:00.0: cvl_0_0 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.245 21:47:55 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:02.245 Found net devices under 0000:86:00.1: cvl_0_1 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.245 21:47:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:02.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:10:02.245 00:10:02.245 --- 10.0.0.2 ping statistics --- 00:10:02.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.245 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:10:02.245 00:10:02.245 --- 10.0.0.1 ping statistics --- 00:10:02.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.245 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3594036 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3594036 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3594036 ']' 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.245 21:47:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.245 [2024-07-15 21:47:56.252675] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
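With both pings answering in well under a millisecond, the phy-mode topology is in place: one E810 port (cvl_0_0, 10.0.0.2) sits inside the cvl_0_0_ns_spdk namespace to host the target, which is why nvmf_tgt above is launched through ip netns exec cvl_0_0_ns_spdk, while the other port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator. Collected from the nvmf/common.sh xtrace above into one block, same commands in trace order, with comments added:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port inside
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator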
00:10:02.245 [2024-07-15 21:47:56.252718] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.245 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.245 [2024-07-15 21:47:56.310648] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.245 [2024-07-15 21:47:56.390032] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.245 [2024-07-15 21:47:56.390066] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.245 [2024-07-15 21:47:56.390073] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.245 [2024-07-15 21:47:56.390079] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.245 [2024-07-15 21:47:56.390084] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.245 [2024-07-15 21:47:56.390180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.245 [2024-07-15 21:47:56.390285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.245 [2024-07-15 21:47:56.390286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.183 [2024-07-15 21:47:57.111528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.183 [2024-07-15 21:47:57.140323] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.183 NULL1 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3594147 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.183 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.184 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.443 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.443 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:03.443 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.443 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.443 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.702 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.702 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:03.702 21:47:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.702 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.702 21:47:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.269 21:47:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.269 21:47:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 3594147 00:10:04.269 21:47:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.269 21:47:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.269 21:47:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.527 21:47:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.527 21:47:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:04.527 21:47:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.527 21:47:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.527 21:47:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.786 21:47:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.786 21:47:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:04.786 21:47:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.786 21:47:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.786 21:47:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.045 21:47:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.045 21:47:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:05.045 21:47:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.045 21:47:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.045 21:47:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.304 21:47:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.304 21:47:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:05.304 21:47:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.304 21:47:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.304 21:47:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.872 21:47:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.872 21:47:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:05.872 21:47:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.872 21:47:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.872 21:47:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.130 21:48:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.130 21:48:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:06.130 21:48:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.130 21:48:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.130 21:48:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.388 21:48:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.388 21:48:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:06.388 21:48:00 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.388 21:48:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.388 21:48:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.647 21:48:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.647 21:48:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:06.647 21:48:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.647 21:48:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.647 21:48:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.905 21:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.905 21:48:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:06.905 21:48:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.905 21:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.905 21:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.472 21:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.472 21:48:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:07.472 21:48:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.472 21:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.472 21:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.730 21:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.730 21:48:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:07.730 21:48:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.731 21:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.731 21:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.989 21:48:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.989 21:48:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:07.989 21:48:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.989 21:48:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.989 21:48:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.246 21:48:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.246 21:48:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:08.246 21:48:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.246 21:48:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.246 21:48:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.503 21:48:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.503 21:48:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:08.761 21:48:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:08.761 21:48:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.761 21:48:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.019 21:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.019 21:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:09.019 21:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.019 21:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.019 21:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.277 21:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.277 21:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:09.277 21:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.277 21:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.277 21:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.535 21:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.535 21:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:09.535 21:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.535 21:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.535 21:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.101 21:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.101 21:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:10.101 21:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.101 21:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.101 21:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.359 21:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.359 21:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:10.359 21:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.359 21:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.359 21:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.617 21:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.617 21:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:10.617 21:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.617 21:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.617 21:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.876 21:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.876 21:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:10.876 21:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.876 21:48:05 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.876 21:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.134 21:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.134 21:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:11.134 21:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.134 21:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.134 21:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.702 21:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.702 21:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:11.702 21:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.702 21:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.702 21:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.960 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.960 21:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:11.960 21:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.960 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.960 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.219 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.219 21:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:12.219 21:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:12.219 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.219 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.478 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.478 21:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:12.478 21:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:12.478 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.478 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.046 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.046 21:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:13.046 21:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.046 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.046 21:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.305 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.305 21:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:13.305 21:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.305 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:10:13.305 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.305 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3594147 00:10:13.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3594147) - No such process 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3594147 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:13.564 rmmod nvme_tcp 00:10:13.564 rmmod nvme_fabrics 00:10:13.564 rmmod nvme_keyring 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3594036 ']' 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3594036 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3594036 ']' 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3594036 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3594036 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3594036' 00:10:13.564 killing process with pid 3594036 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3594036 00:10:13.564 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3594036 00:10:13.823 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:13.823 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:13.823 21:48:07 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:13.823 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:13.823 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:13.823 21:48:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.823 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.823 21:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.357 21:48:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:16.357 00:10:16.357 real 0m19.323s 00:10:16.357 user 0m42.264s 00:10:16.357 sys 0m7.835s 00:10:16.357 21:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.357 21:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.357 ************************************ 00:10:16.357 END TEST nvmf_connect_stress 00:10:16.357 ************************************ 00:10:16.357 21:48:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:16.357 21:48:10 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:16.357 21:48:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:16.357 21:48:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.357 21:48:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:16.357 ************************************ 00:10:16.357 START TEST nvmf_fused_ordering 00:10:16.357 ************************************ 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:16.357 * Looking for test storage... 
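The connect_stress phase that just ended above reduces to a watchdog loop: the harness backgrounds the stress binary against the target (PID 3594147 in this run), seeds rpc.txt with a 20-entry RPC batch, then alternates kill -0 liveness probes (connect_stress.sh line 34) with rpc_cmd batches (line 35) so the target's RPC path stays loaded while connections churn; once the probe fails with "No such process" the binary has exited on its own, and the script reaps it with wait and removes rpc.txt. A paraphrased sketch of that control flow follows; connect_stress, rpc_cmd and the flags are taken from the trace, while $testdir, $rpcs and the batch payload stand in for details not visible in this excerpt.

    # Watchdog shape of connect_stress.sh, reconstructed from the xtrace above.
    "$testdir/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &                                  # churn connections for ~10 s
    PERF_PID=$!

    rm -f "$rpcs"
    for i in $(seq 1 20); do                     # the 20x cat loop in the log
        echo '<one RPC per entry; payload not shown in this excerpt>' >> "$rpcs"
    done

    while kill -0 "$PERF_PID" 2>/dev/null; do    # succeeds while the tool is alive
        rpc_cmd < "$rpcs"                        # keep the admin path busy meanwhile
    done
    wait "$PERF_PID"                             # reap; a bad exit status fails the test
    rm -f "$rpcs"

Driving RPC batches at the target while connections come and go is the point of the test: the liveness probe doubles as a load generator for the admin path.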
00:10:16.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.357 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:16.358 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:16.358 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:16.358 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.358 21:48:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:16.358 21:48:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.358 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:16.358 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:16.358 21:48:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:16.358 21:48:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:21.707 21:48:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:21.707 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:21.707 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:21.707 Found net devices under 0000:86:00.0: cvl_0_0 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.707 21:48:15 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:21.707 Found net devices under 0000:86:00.1: cvl_0_1 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.707 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:10:21.708 00:10:21.708 --- 10.0.0.2 ping statistics --- 00:10:21.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.708 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:10:21.708 00:10:21.708 --- 10.0.0.1 ping statistics --- 00:10:21.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.708 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3599803 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3599803 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3599803 ']' 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.708 21:48:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.708 [2024-07-15 21:48:15.334137] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
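The block above is nvmf_tcp_init turning the two E810 ports into a point-to-point rig on a single host: the target port (cvl_0_0, 10.0.0.2) is moved into its own network namespace, the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, the iptables rule admits NVMe/TCP on port 4420, and the two pings prove the path in both directions before nvmf_tgt is launched inside the namespace. Collected in one place, with the commands as they appear in the trace (run as root; ./build/bin is the SPDK build tree, abbreviated from the full workspace path in the log):

    # nvmf_tcp_init sequence from the trace, in order.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

Because both ends sit on real E810 ports rather than a loopback device, the NVMe/TCP traffic in the rest of this test crosses actual hardware even though initiator and target share the machine.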
00:10:21.708 [2024-07-15 21:48:15.334186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.708 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.708 [2024-07-15 21:48:15.392834] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.708 [2024-07-15 21:48:15.471329] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.708 [2024-07-15 21:48:15.471364] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.708 [2024-07-15 21:48:15.471371] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.708 [2024-07-15 21:48:15.471378] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.708 [2024-07-15 21:48:15.471384] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.708 [2024-07-15 21:48:15.471406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.966 [2024-07-15 21:48:16.175760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.966 [2024-07-15 21:48:16.191875] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.966 21:48:16 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.966 NULL1 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.966 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:22.224 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.224 21:48:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:22.224 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.224 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:22.224 21:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.224 21:48:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:22.224 [2024-07-15 21:48:16.243514] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:10:22.224 [2024-07-15 21:48:16.243546] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600050 ] 00:10:22.224 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.483 Attached to nqn.2016-06.io.spdk:cnode1 00:10:22.483 Namespace ID: 1 size: 1GB 00:10:22.483 fused_ordering(0) 00:10:22.483 fused_ordering(1) 00:10:22.483 fused_ordering(2) 00:10:22.483 fused_ordering(3) 00:10:22.483 fused_ordering(4) 00:10:22.483 fused_ordering(5) 00:10:22.483 fused_ordering(6) 00:10:22.483 fused_ordering(7) 00:10:22.483 fused_ordering(8) 00:10:22.483 fused_ordering(9) 00:10:22.483 fused_ordering(10) 00:10:22.483 fused_ordering(11) 00:10:22.483 fused_ordering(12) 00:10:22.483 fused_ordering(13) 00:10:22.483 fused_ordering(14) 00:10:22.483 fused_ordering(15) 00:10:22.483 fused_ordering(16) 00:10:22.483 fused_ordering(17) 00:10:22.483 fused_ordering(18) 00:10:22.483 fused_ordering(19) 00:10:22.483 fused_ordering(20) 00:10:22.483 fused_ordering(21) 00:10:22.483 fused_ordering(22) 00:10:22.483 fused_ordering(23) 00:10:22.483 fused_ordering(24) 00:10:22.483 fused_ordering(25) 00:10:22.483 fused_ordering(26) 00:10:22.483 fused_ordering(27) 00:10:22.483 fused_ordering(28) 00:10:22.483 fused_ordering(29) 00:10:22.483 fused_ordering(30) 00:10:22.483 fused_ordering(31) 00:10:22.483 fused_ordering(32) 00:10:22.483 fused_ordering(33) 00:10:22.483 fused_ordering(34) 00:10:22.483 fused_ordering(35) 00:10:22.483 fused_ordering(36) 00:10:22.483 fused_ordering(37) 00:10:22.483 fused_ordering(38) 00:10:22.483 fused_ordering(39) 00:10:22.483 fused_ordering(40) 00:10:22.483 fused_ordering(41) 00:10:22.483 fused_ordering(42) 00:10:22.483 fused_ordering(43) 00:10:22.483 
fused_ordering(44) 00:10:22.483 fused_ordering(45) 00:10:22.483 fused_ordering(46) ... fused_ordering(473) 00:10:23.002 fused_ordering(474) 00:10:23.002 fused_ordering(475)
00:10:23.002 fused_ordering(476) 00:10:23.002 fused_ordering(477) 00:10:23.002 fused_ordering(478) 00:10:23.002 fused_ordering(479) 00:10:23.002 fused_ordering(480) 00:10:23.002 fused_ordering(481) 00:10:23.002 fused_ordering(482) 00:10:23.002 fused_ordering(483) 00:10:23.002 fused_ordering(484) 00:10:23.002 fused_ordering(485) 00:10:23.002 fused_ordering(486) 00:10:23.002 fused_ordering(487) 00:10:23.002 fused_ordering(488) 00:10:23.002 fused_ordering(489) 00:10:23.002 fused_ordering(490) 00:10:23.002 fused_ordering(491) 00:10:23.002 fused_ordering(492) 00:10:23.002 fused_ordering(493) 00:10:23.002 fused_ordering(494) 00:10:23.002 fused_ordering(495) 00:10:23.002 fused_ordering(496) 00:10:23.002 fused_ordering(497) 00:10:23.002 fused_ordering(498) 00:10:23.002 fused_ordering(499) 00:10:23.002 fused_ordering(500) 00:10:23.002 fused_ordering(501) 00:10:23.002 fused_ordering(502) 00:10:23.002 fused_ordering(503) 00:10:23.002 fused_ordering(504) 00:10:23.002 fused_ordering(505) 00:10:23.002 fused_ordering(506) 00:10:23.002 fused_ordering(507) 00:10:23.002 fused_ordering(508) 00:10:23.002 fused_ordering(509) 00:10:23.002 fused_ordering(510) 00:10:23.002 fused_ordering(511) 00:10:23.002 fused_ordering(512) 00:10:23.002 fused_ordering(513) 00:10:23.002 fused_ordering(514) 00:10:23.002 fused_ordering(515) 00:10:23.002 fused_ordering(516) 00:10:23.002 fused_ordering(517) 00:10:23.002 fused_ordering(518) 00:10:23.002 fused_ordering(519) 00:10:23.002 fused_ordering(520) 00:10:23.002 fused_ordering(521) 00:10:23.002 fused_ordering(522) 00:10:23.002 fused_ordering(523) 00:10:23.002 fused_ordering(524) 00:10:23.002 fused_ordering(525) 00:10:23.002 fused_ordering(526) 00:10:23.002 fused_ordering(527) 00:10:23.002 fused_ordering(528) 00:10:23.002 fused_ordering(529) 00:10:23.002 fused_ordering(530) 00:10:23.002 fused_ordering(531) 00:10:23.002 fused_ordering(532) 00:10:23.002 fused_ordering(533) 00:10:23.002 fused_ordering(534) 00:10:23.002 fused_ordering(535) 00:10:23.002 fused_ordering(536) 00:10:23.002 fused_ordering(537) 00:10:23.002 fused_ordering(538) 00:10:23.002 fused_ordering(539) 00:10:23.002 fused_ordering(540) 00:10:23.002 fused_ordering(541) 00:10:23.002 fused_ordering(542) 00:10:23.002 fused_ordering(543) 00:10:23.002 fused_ordering(544) 00:10:23.002 fused_ordering(545) 00:10:23.002 fused_ordering(546) 00:10:23.002 fused_ordering(547) 00:10:23.002 fused_ordering(548) 00:10:23.002 fused_ordering(549) 00:10:23.002 fused_ordering(550) 00:10:23.002 fused_ordering(551) 00:10:23.002 fused_ordering(552) 00:10:23.002 fused_ordering(553) 00:10:23.002 fused_ordering(554) 00:10:23.002 fused_ordering(555) 00:10:23.002 fused_ordering(556) 00:10:23.002 fused_ordering(557) 00:10:23.002 fused_ordering(558) 00:10:23.002 fused_ordering(559) 00:10:23.002 fused_ordering(560) 00:10:23.002 fused_ordering(561) 00:10:23.002 fused_ordering(562) 00:10:23.002 fused_ordering(563) 00:10:23.002 fused_ordering(564) 00:10:23.002 fused_ordering(565) 00:10:23.002 fused_ordering(566) 00:10:23.002 fused_ordering(567) 00:10:23.002 fused_ordering(568) 00:10:23.002 fused_ordering(569) 00:10:23.002 fused_ordering(570) 00:10:23.002 fused_ordering(571) 00:10:23.002 fused_ordering(572) 00:10:23.002 fused_ordering(573) 00:10:23.002 fused_ordering(574) 00:10:23.002 fused_ordering(575) 00:10:23.002 fused_ordering(576) 00:10:23.002 fused_ordering(577) 00:10:23.002 fused_ordering(578) 00:10:23.002 fused_ordering(579) 00:10:23.002 fused_ordering(580) 00:10:23.002 fused_ordering(581) 00:10:23.002 fused_ordering(582) 00:10:23.002 
fused_ordering(583) 00:10:23.002 fused_ordering(584) 00:10:23.002 fused_ordering(585) 00:10:23.002 fused_ordering(586) 00:10:23.002 fused_ordering(587) 00:10:23.002 fused_ordering(588) 00:10:23.002 fused_ordering(589) 00:10:23.002 fused_ordering(590) 00:10:23.002 fused_ordering(591) 00:10:23.002 fused_ordering(592) 00:10:23.002 fused_ordering(593) 00:10:23.002 fused_ordering(594) 00:10:23.002 fused_ordering(595) 00:10:23.002 fused_ordering(596) 00:10:23.002 fused_ordering(597) 00:10:23.002 fused_ordering(598) 00:10:23.002 fused_ordering(599) 00:10:23.003 fused_ordering(600) 00:10:23.003 fused_ordering(601) 00:10:23.003 fused_ordering(602) 00:10:23.003 fused_ordering(603) 00:10:23.003 fused_ordering(604) 00:10:23.003 fused_ordering(605) 00:10:23.003 fused_ordering(606) 00:10:23.003 fused_ordering(607) 00:10:23.003 fused_ordering(608) 00:10:23.003 fused_ordering(609) 00:10:23.003 fused_ordering(610) 00:10:23.003 fused_ordering(611) 00:10:23.003 fused_ordering(612) 00:10:23.003 fused_ordering(613) 00:10:23.003 fused_ordering(614) 00:10:23.003 fused_ordering(615) 00:10:23.568 fused_ordering(616) 00:10:23.568 fused_ordering(617) 00:10:23.568 fused_ordering(618) 00:10:23.568 fused_ordering(619) 00:10:23.568 fused_ordering(620) 00:10:23.568 fused_ordering(621) 00:10:23.568 fused_ordering(622) 00:10:23.568 fused_ordering(623) 00:10:23.568 fused_ordering(624) 00:10:23.568 fused_ordering(625) 00:10:23.568 fused_ordering(626) 00:10:23.568 fused_ordering(627) 00:10:23.568 fused_ordering(628) 00:10:23.568 fused_ordering(629) 00:10:23.568 fused_ordering(630) 00:10:23.568 fused_ordering(631) 00:10:23.568 fused_ordering(632) 00:10:23.568 fused_ordering(633) 00:10:23.568 fused_ordering(634) 00:10:23.568 fused_ordering(635) 00:10:23.568 fused_ordering(636) 00:10:23.568 fused_ordering(637) 00:10:23.568 fused_ordering(638) 00:10:23.568 fused_ordering(639) 00:10:23.568 fused_ordering(640) 00:10:23.568 fused_ordering(641) 00:10:23.568 fused_ordering(642) 00:10:23.568 fused_ordering(643) 00:10:23.568 fused_ordering(644) 00:10:23.568 fused_ordering(645) 00:10:23.568 fused_ordering(646) 00:10:23.568 fused_ordering(647) 00:10:23.568 fused_ordering(648) 00:10:23.568 fused_ordering(649) 00:10:23.568 fused_ordering(650) 00:10:23.568 fused_ordering(651) 00:10:23.568 fused_ordering(652) 00:10:23.568 fused_ordering(653) 00:10:23.568 fused_ordering(654) 00:10:23.568 fused_ordering(655) 00:10:23.568 fused_ordering(656) 00:10:23.568 fused_ordering(657) 00:10:23.568 fused_ordering(658) 00:10:23.568 fused_ordering(659) 00:10:23.568 fused_ordering(660) 00:10:23.568 fused_ordering(661) 00:10:23.568 fused_ordering(662) 00:10:23.568 fused_ordering(663) 00:10:23.568 fused_ordering(664) 00:10:23.568 fused_ordering(665) 00:10:23.568 fused_ordering(666) 00:10:23.568 fused_ordering(667) 00:10:23.568 fused_ordering(668) 00:10:23.568 fused_ordering(669) 00:10:23.568 fused_ordering(670) 00:10:23.568 fused_ordering(671) 00:10:23.568 fused_ordering(672) 00:10:23.568 fused_ordering(673) 00:10:23.568 fused_ordering(674) 00:10:23.568 fused_ordering(675) 00:10:23.568 fused_ordering(676) 00:10:23.568 fused_ordering(677) 00:10:23.568 fused_ordering(678) 00:10:23.568 fused_ordering(679) 00:10:23.568 fused_ordering(680) 00:10:23.568 fused_ordering(681) 00:10:23.568 fused_ordering(682) 00:10:23.568 fused_ordering(683) 00:10:23.568 fused_ordering(684) 00:10:23.568 fused_ordering(685) 00:10:23.568 fused_ordering(686) 00:10:23.568 fused_ordering(687) 00:10:23.568 fused_ordering(688) 00:10:23.568 fused_ordering(689) 00:10:23.568 fused_ordering(690) 
00:10:23.568 fused_ordering(691) 00:10:23.568 fused_ordering(692) 00:10:23.568 fused_ordering(693) 00:10:23.568 fused_ordering(694) 00:10:23.568 fused_ordering(695) 00:10:23.568 fused_ordering(696) 00:10:23.568 fused_ordering(697) 00:10:23.568 fused_ordering(698) 00:10:23.568 fused_ordering(699) 00:10:23.568 fused_ordering(700) 00:10:23.568 fused_ordering(701) 00:10:23.568 fused_ordering(702) 00:10:23.568 fused_ordering(703) 00:10:23.568 fused_ordering(704) 00:10:23.568 fused_ordering(705) 00:10:23.568 fused_ordering(706) 00:10:23.568 fused_ordering(707) 00:10:23.568 fused_ordering(708) 00:10:23.568 fused_ordering(709) 00:10:23.568 fused_ordering(710) 00:10:23.568 fused_ordering(711) 00:10:23.568 fused_ordering(712) 00:10:23.569 fused_ordering(713) 00:10:23.569 fused_ordering(714) 00:10:23.569 fused_ordering(715) 00:10:23.569 fused_ordering(716) 00:10:23.569 fused_ordering(717) 00:10:23.569 fused_ordering(718) 00:10:23.569 fused_ordering(719) 00:10:23.569 fused_ordering(720) 00:10:23.569 fused_ordering(721) 00:10:23.569 fused_ordering(722) 00:10:23.569 fused_ordering(723) 00:10:23.569 fused_ordering(724) 00:10:23.569 fused_ordering(725) 00:10:23.569 fused_ordering(726) 00:10:23.569 fused_ordering(727) 00:10:23.569 fused_ordering(728) 00:10:23.569 fused_ordering(729) 00:10:23.569 fused_ordering(730) 00:10:23.569 fused_ordering(731) 00:10:23.569 fused_ordering(732) 00:10:23.569 fused_ordering(733) 00:10:23.569 fused_ordering(734) 00:10:23.569 fused_ordering(735) 00:10:23.569 fused_ordering(736) 00:10:23.569 fused_ordering(737) 00:10:23.569 fused_ordering(738) 00:10:23.569 fused_ordering(739) 00:10:23.569 fused_ordering(740) 00:10:23.569 fused_ordering(741) 00:10:23.569 fused_ordering(742) 00:10:23.569 fused_ordering(743) 00:10:23.569 fused_ordering(744) 00:10:23.569 fused_ordering(745) 00:10:23.569 fused_ordering(746) 00:10:23.569 fused_ordering(747) 00:10:23.569 fused_ordering(748) 00:10:23.569 fused_ordering(749) 00:10:23.569 fused_ordering(750) 00:10:23.569 fused_ordering(751) 00:10:23.569 fused_ordering(752) 00:10:23.569 fused_ordering(753) 00:10:23.569 fused_ordering(754) 00:10:23.569 fused_ordering(755) 00:10:23.569 fused_ordering(756) 00:10:23.569 fused_ordering(757) 00:10:23.569 fused_ordering(758) 00:10:23.569 fused_ordering(759) 00:10:23.569 fused_ordering(760) 00:10:23.569 fused_ordering(761) 00:10:23.569 fused_ordering(762) 00:10:23.569 fused_ordering(763) 00:10:23.569 fused_ordering(764) 00:10:23.569 fused_ordering(765) 00:10:23.569 fused_ordering(766) 00:10:23.569 fused_ordering(767) 00:10:23.569 fused_ordering(768) 00:10:23.569 fused_ordering(769) 00:10:23.569 fused_ordering(770) 00:10:23.569 fused_ordering(771) 00:10:23.569 fused_ordering(772) 00:10:23.569 fused_ordering(773) 00:10:23.569 fused_ordering(774) 00:10:23.569 fused_ordering(775) 00:10:23.569 fused_ordering(776) 00:10:23.569 fused_ordering(777) 00:10:23.569 fused_ordering(778) 00:10:23.569 fused_ordering(779) 00:10:23.569 fused_ordering(780) 00:10:23.569 fused_ordering(781) 00:10:23.569 fused_ordering(782) 00:10:23.569 fused_ordering(783) 00:10:23.569 fused_ordering(784) 00:10:23.569 fused_ordering(785) 00:10:23.569 fused_ordering(786) 00:10:23.569 fused_ordering(787) 00:10:23.569 fused_ordering(788) 00:10:23.569 fused_ordering(789) 00:10:23.569 fused_ordering(790) 00:10:23.569 fused_ordering(791) 00:10:23.569 fused_ordering(792) 00:10:23.569 fused_ordering(793) 00:10:23.569 fused_ordering(794) 00:10:23.569 fused_ordering(795) 00:10:23.569 fused_ordering(796) 00:10:23.569 fused_ordering(797) 00:10:23.569 
fused_ordering(798) 00:10:23.569 fused_ordering(799) 00:10:23.569 fused_ordering(800) 00:10:23.569 fused_ordering(801) 00:10:23.569 fused_ordering(802) 00:10:23.569 fused_ordering(803) 00:10:23.569 fused_ordering(804) 00:10:23.569 fused_ordering(805) 00:10:23.569 fused_ordering(806) 00:10:23.569 fused_ordering(807) 00:10:23.569 fused_ordering(808) 00:10:23.569 fused_ordering(809) 00:10:23.569 fused_ordering(810) 00:10:23.569 fused_ordering(811) 00:10:23.569 fused_ordering(812) 00:10:23.569 fused_ordering(813) 00:10:23.569 fused_ordering(814) 00:10:23.569 fused_ordering(815) 00:10:23.569 fused_ordering(816) 00:10:23.569 fused_ordering(817) 00:10:23.569 fused_ordering(818) 00:10:23.569 fused_ordering(819) 00:10:23.569 fused_ordering(820) 00:10:24.136 fused_ordering(821) 00:10:24.136 fused_ordering(822) 00:10:24.136 fused_ordering(823) 00:10:24.136 fused_ordering(824) 00:10:24.136 fused_ordering(825) 00:10:24.136 fused_ordering(826) 00:10:24.136 fused_ordering(827) 00:10:24.136 fused_ordering(828) 00:10:24.136 fused_ordering(829) 00:10:24.136 fused_ordering(830) 00:10:24.136 fused_ordering(831) 00:10:24.136 fused_ordering(832) 00:10:24.136 fused_ordering(833) 00:10:24.136 fused_ordering(834) 00:10:24.136 fused_ordering(835) 00:10:24.136 fused_ordering(836) 00:10:24.136 fused_ordering(837) 00:10:24.136 fused_ordering(838) 00:10:24.136 fused_ordering(839) 00:10:24.136 fused_ordering(840) 00:10:24.136 fused_ordering(841) 00:10:24.136 fused_ordering(842) 00:10:24.136 fused_ordering(843) 00:10:24.136 fused_ordering(844) 00:10:24.136 fused_ordering(845) 00:10:24.136 fused_ordering(846) 00:10:24.136 fused_ordering(847) 00:10:24.136 fused_ordering(848) 00:10:24.136 fused_ordering(849) 00:10:24.136 fused_ordering(850) 00:10:24.136 fused_ordering(851) 00:10:24.136 fused_ordering(852) 00:10:24.136 fused_ordering(853) 00:10:24.136 fused_ordering(854) 00:10:24.136 fused_ordering(855) 00:10:24.136 fused_ordering(856) 00:10:24.136 fused_ordering(857) 00:10:24.136 fused_ordering(858) 00:10:24.136 fused_ordering(859) 00:10:24.136 fused_ordering(860) 00:10:24.136 fused_ordering(861) 00:10:24.136 fused_ordering(862) 00:10:24.136 fused_ordering(863) 00:10:24.136 fused_ordering(864) 00:10:24.136 fused_ordering(865) 00:10:24.136 fused_ordering(866) 00:10:24.136 fused_ordering(867) 00:10:24.136 fused_ordering(868) 00:10:24.136 fused_ordering(869) 00:10:24.136 fused_ordering(870) 00:10:24.136 fused_ordering(871) 00:10:24.136 fused_ordering(872) 00:10:24.136 fused_ordering(873) 00:10:24.136 fused_ordering(874) 00:10:24.136 fused_ordering(875) 00:10:24.136 fused_ordering(876) 00:10:24.136 fused_ordering(877) 00:10:24.136 fused_ordering(878) 00:10:24.136 fused_ordering(879) 00:10:24.136 fused_ordering(880) 00:10:24.136 fused_ordering(881) 00:10:24.136 fused_ordering(882) 00:10:24.136 fused_ordering(883) 00:10:24.136 fused_ordering(884) 00:10:24.136 fused_ordering(885) 00:10:24.136 fused_ordering(886) 00:10:24.136 fused_ordering(887) 00:10:24.136 fused_ordering(888) 00:10:24.137 fused_ordering(889) 00:10:24.137 fused_ordering(890) 00:10:24.137 fused_ordering(891) 00:10:24.137 fused_ordering(892) 00:10:24.137 fused_ordering(893) 00:10:24.137 fused_ordering(894) 00:10:24.137 fused_ordering(895) 00:10:24.137 fused_ordering(896) 00:10:24.137 fused_ordering(897) 00:10:24.137 fused_ordering(898) 00:10:24.137 fused_ordering(899) 00:10:24.137 fused_ordering(900) 00:10:24.137 fused_ordering(901) 00:10:24.137 fused_ordering(902) 00:10:24.137 fused_ordering(903) 00:10:24.137 fused_ordering(904) 00:10:24.137 fused_ordering(905) 
00:10:24.137 fused_ordering(906) 00:10:24.137 fused_ordering(907) 00:10:24.137 fused_ordering(908) 00:10:24.137 fused_ordering(909) 00:10:24.137 fused_ordering(910) 00:10:24.137 fused_ordering(911) 00:10:24.137 fused_ordering(912) 00:10:24.137 fused_ordering(913) 00:10:24.137 fused_ordering(914) 00:10:24.137 fused_ordering(915) 00:10:24.137 fused_ordering(916) 00:10:24.137 fused_ordering(917) 00:10:24.137 fused_ordering(918) 00:10:24.137 fused_ordering(919) 00:10:24.137 fused_ordering(920) 00:10:24.137 fused_ordering(921) 00:10:24.137 fused_ordering(922) 00:10:24.137 fused_ordering(923) 00:10:24.137 fused_ordering(924) 00:10:24.137 fused_ordering(925) 00:10:24.137 fused_ordering(926) 00:10:24.137 fused_ordering(927) 00:10:24.137 fused_ordering(928) 00:10:24.137 fused_ordering(929) 00:10:24.137 fused_ordering(930) 00:10:24.137 fused_ordering(931) 00:10:24.137 fused_ordering(932) 00:10:24.137 fused_ordering(933) 00:10:24.137 fused_ordering(934) 00:10:24.137 fused_ordering(935) 00:10:24.137 fused_ordering(936) 00:10:24.137 fused_ordering(937) 00:10:24.137 fused_ordering(938) 00:10:24.137 fused_ordering(939) 00:10:24.137 fused_ordering(940) 00:10:24.137 fused_ordering(941) 00:10:24.137 fused_ordering(942) 00:10:24.137 fused_ordering(943) 00:10:24.137 fused_ordering(944) 00:10:24.137 fused_ordering(945) 00:10:24.137 fused_ordering(946) 00:10:24.137 fused_ordering(947) 00:10:24.137 fused_ordering(948) 00:10:24.137 fused_ordering(949) 00:10:24.137 fused_ordering(950) 00:10:24.137 fused_ordering(951) 00:10:24.137 fused_ordering(952) 00:10:24.137 fused_ordering(953) 00:10:24.137 fused_ordering(954) 00:10:24.137 fused_ordering(955) 00:10:24.137 fused_ordering(956) 00:10:24.137 fused_ordering(957) 00:10:24.137 fused_ordering(958) 00:10:24.137 fused_ordering(959) 00:10:24.137 fused_ordering(960) 00:10:24.137 fused_ordering(961) 00:10:24.137 fused_ordering(962) 00:10:24.137 fused_ordering(963) 00:10:24.137 fused_ordering(964) 00:10:24.137 fused_ordering(965) 00:10:24.137 fused_ordering(966) 00:10:24.137 fused_ordering(967) 00:10:24.137 fused_ordering(968) 00:10:24.137 fused_ordering(969) 00:10:24.137 fused_ordering(970) 00:10:24.137 fused_ordering(971) 00:10:24.137 fused_ordering(972) 00:10:24.137 fused_ordering(973) 00:10:24.137 fused_ordering(974) 00:10:24.137 fused_ordering(975) 00:10:24.137 fused_ordering(976) 00:10:24.137 fused_ordering(977) 00:10:24.137 fused_ordering(978) 00:10:24.137 fused_ordering(979) 00:10:24.137 fused_ordering(980) 00:10:24.137 fused_ordering(981) 00:10:24.137 fused_ordering(982) 00:10:24.137 fused_ordering(983) 00:10:24.137 fused_ordering(984) 00:10:24.137 fused_ordering(985) 00:10:24.137 fused_ordering(986) 00:10:24.137 fused_ordering(987) 00:10:24.137 fused_ordering(988) 00:10:24.137 fused_ordering(989) 00:10:24.137 fused_ordering(990) 00:10:24.137 fused_ordering(991) 00:10:24.137 fused_ordering(992) 00:10:24.137 fused_ordering(993) 00:10:24.137 fused_ordering(994) 00:10:24.137 fused_ordering(995) 00:10:24.137 fused_ordering(996) 00:10:24.137 fused_ordering(997) 00:10:24.137 fused_ordering(998) 00:10:24.137 fused_ordering(999) 00:10:24.137 fused_ordering(1000) 00:10:24.137 fused_ordering(1001) 00:10:24.137 fused_ordering(1002) 00:10:24.137 fused_ordering(1003) 00:10:24.137 fused_ordering(1004) 00:10:24.137 fused_ordering(1005) 00:10:24.137 fused_ordering(1006) 00:10:24.137 fused_ordering(1007) 00:10:24.137 fused_ordering(1008) 00:10:24.137 fused_ordering(1009) 00:10:24.137 fused_ordering(1010) 00:10:24.137 fused_ordering(1011) 00:10:24.137 fused_ordering(1012) 
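What follows the trace counters is the standard nvmf test teardown: the EXIT trap is cleared, nvmftestfini syncs and unloads nvme-tcp/nvme-fabrics/nvme-keyring, and killprocess stops the target application. A minimal bash sketch of the killprocess pattern echoed below (a hypothetical standalone form; the sudo guard and ps check mirror what autotest_common.sh prints in this log, but the sudo case is simplified here):

  killprocess() {
      local pid=$1
      # kill -0 only probes that the process exists and is signalable
      kill -0 "$pid" || return 1
      if [ "$(uname)" = Linux ]; then
          # check the process name; refuse to SIGTERM a sudo wrapper directly
          # (the real helper handles that case; this sketch just bails out)
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      # wait only reaps children of this shell; ignore failure otherwise
      wait "$pid" 2>/dev/null || true
  }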
00:10:24.137 fused_ordering(1013) 00:10:24.137 fused_ordering(1014) 00:10:24.137 fused_ordering(1015) 00:10:24.137 fused_ordering(1016) 00:10:24.137 fused_ordering(1017) 00:10:24.137 fused_ordering(1018) 00:10:24.137 fused_ordering(1019) 00:10:24.137 fused_ordering(1020) 00:10:24.137 fused_ordering(1021) 00:10:24.137 fused_ordering(1022) 00:10:24.137 fused_ordering(1023) 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.137 rmmod nvme_tcp 00:10:24.137 rmmod nvme_fabrics 00:10:24.137 rmmod nvme_keyring 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3599803 ']' 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3599803 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3599803 ']' 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3599803 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3599803 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3599803' 00:10:24.137 killing process with pid 3599803 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3599803 00:10:24.137 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3599803 00:10:24.398 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.398 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:24.398 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:24.398 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.398 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.398 21:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.398 21:48:18 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.398 21:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.300 21:48:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:26.578 00:10:26.578 real 0m10.482s 00:10:26.578 user 0m5.420s 00:10:26.578 sys 0m5.448s 00:10:26.578 21:48:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.578 21:48:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.578 ************************************ 00:10:26.578 END TEST nvmf_fused_ordering 00:10:26.578 ************************************ 00:10:26.578 21:48:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:26.578 21:48:20 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:26.578 21:48:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:26.578 21:48:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.578 21:48:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.578 ************************************ 00:10:26.578 START TEST nvmf_delete_subsystem 00:10:26.578 ************************************ 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:26.578 * Looking for test storage... 00:10:26.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.578 21:48:20 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:26.578 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.579 21:48:20 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:26.579 21:48:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.839 21:48:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:31.839 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:31.839 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:31.839 21:48:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:31.839 Found net devices under 0000:86:00.0: cvl_0_0 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:31.839 Found net devices under 0000:86:00.1: cvl_0_1 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.839 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:31.840 21:48:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:31.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:10:31.840 00:10:31.840 --- 10.0.0.2 ping statistics --- 00:10:31.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.840 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:10:31.840 00:10:31.840 --- 10.0.0.1 ping statistics --- 00:10:31.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.840 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3603792 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3603792 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3603792 ']' 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.840 21:48:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.840 [2024-07-15 21:48:26.030611] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:10:31.840 [2024-07-15 21:48:26.030655] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.840 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.097 [2024-07-15 21:48:26.088161] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:32.097 [2024-07-15 21:48:26.171249] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:32.097 [2024-07-15 21:48:26.171299] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.097 [2024-07-15 21:48:26.171306] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.097 [2024-07-15 21:48:26.171312] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.097 [2024-07-15 21:48:26.171317] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.097 [2024-07-15 21:48:26.171360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.097 [2024-07-15 21:48:26.171363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.662 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.662 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:32.662 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.662 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.663 [2024-07-15 21:48:26.864975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.663 [2024-07-15 21:48:26.885126] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.663 NULL1 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.663 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.920 Delay0 00:10:32.920 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.920 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.920 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.920 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.920 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.920 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3604040 00:10:32.920 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:32.920 21:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:32.920 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.920 [2024-07-15 21:48:26.965772] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
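Up to this point the target has been configured entirely through the rpc_cmd wrapper. The same setup, expressed directly against the target's RPC socket with scripts/rpc.py, looks like the sketch below; the command names and flags are exactly those echoed above, and only the $RPC path is an assumption tied to this workspace:

  # Recreate the delete_subsystem target setup from this run via rpc.py.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Null backing bdev (1000 MB, 512-byte blocks) wrapped in a delay bdev with
  # 1,000,000 us (1 s) average and p99 read/write latencies, so submitted I/O
  # stays queued long enough for the subsystem to be deleted mid-flight.
  $RPC bdev_null_create NULL1 1000 512
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0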
00:10:34.819 21:48:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:34.819 21:48:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:34.819 21:48:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:35.077 Read completed with error (sct=0, sc=8)
00:10:35.077 starting I/O failed: -6
00:10:35.077 Read completed with error (sct=0, sc=8)
00:10:35.077 Read completed with error (sct=0, sc=8)
00:10:35.077 Read completed with error (sct=0, sc=8)
00:10:35.077 Read completed with error (sct=0, sc=8)
00:10:35.077 starting I/O failed: -6
[several hundred further "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided; they continue interleaved with the qpair state errors below]
00:10:35.077 [2024-07-15 21:48:29.085887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x725af0 is same with the state(5) to be set
00:10:35.078 [2024-07-15 21:48:29.086631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc364000c00 is same with the state(5) to be set
00:10:35.078 [2024-07-15 21:48:29.087064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc36400d020 is same with the state(5) to be set
00:10:36.010 [2024-07-15 21:48:30.059633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x726a70 is same with the state(5) to be set
00:10:36.011 [2024-07-15 21:48:30.088674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc36400d370 is same with the state(5) to be set
00:10:36.011 [2024-07-15 21:48:30.089264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7257a0 is same with the state(5) to be set
00:10:36.011 [2024-07-15 21:48:30.089975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x725390 is same with the state(5) to be set
00:10:36.011 [2024-07-15 21:48:30.090128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x725e40 is same with the state(5) to be set
00:10:36.011 Initializing NVMe Controllers
00:10:36.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:36.011 Controller IO queue size 128, less than required.
00:10:36.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:36.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:36.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:36.011 Initialization complete. Launching workers.
00:10:36.011 ========================================================
00:10:36.011                                                                            Latency(us)
00:10:36.011 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:36.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     189.23       0.09  949672.26     744.45 1011439.29
00:10:36.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     154.47       0.08  880297.23     249.33 1013244.96
00:10:36.011 ========================================================
00:10:36.011 Total                                                                    :     343.70       0.17  918493.60     249.33 1013244.96
00:10:36.011
00:10:36.011 [2024-07-15 21:48:30.090774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x726a70 (9): Bad file descriptor
00:10:36.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:36.011 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3604040
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3604040
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3604040) - No such process
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3604040
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3604040
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3604040
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
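# The first round above, reduced to its essence: delete the subsystem while perf still
# has I/O held in the delay bdev, then poll until spdk_nvme_perf notices the dead
# controller and exits on its own ("errors occurred" is the expected outcome here, not
# a test failure; the sct=0, sc=8 completions are the queued commands coming back
# aborted). A sketch of that pattern, assuming $perf_pid holds the spdk_nvme_perf PID:
#
#   rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
#   delay=0
#   while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 succeeds while the process exists
#       (( delay++ > 30 )) && exit 1            # give up after ~15 s of polling
#       sleep 0.5
#   done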
00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.578 [2024-07-15 21:48:30.617145] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3604518 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3604518 00:10:36.578 21:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:36.578 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.578 [2024-07-15 21:48:30.686992] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
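# Round two re-runs the same workload against the recreated subsystem, this time letting
# perf complete a full run (-t 3 instead of the first round's -t 5). The perf flags used
# by both runs, spelled out for reference; meanings as in spdk_nvme_perf's usage text,
# worth verifying against the local build:
#
#   -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'   # transport ID of the target
#   -c 0xC            # core mask: workers on cores 2 and 3 (the "lcore 2/3" lines above)
#   -q 128            # outstanding I/O per worker
#   -o 512            # I/O size in bytes
#   -w randrw -M 70   # random mixed workload, 70% reads
#   -P 4              # I/O qpairs per namespace
#   -t 3              # run time in seconds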
00:10:37.143 21:48:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
21:48:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3604518
21:48:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[the same three-line poll repeats at 00:10:37.401, 00:10:37.967, 00:10:38.532, 00:10:39.097 and 00:10:39.662 while spdk_nvme_perf finishes its run; five near-identical iterations elided]
00:10:39.662 Initializing NVMe Controllers
00:10:39.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:39.662 Controller IO queue size 128, less than required.
00:10:39.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:39.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:39.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:39.662 Initialization complete. Launching workers.
00:10:39.662 ========================================================
00:10:39.662                                                                            Latency(us)
00:10:39.662 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:39.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003266.30 1000207.13 1040939.27
00:10:39.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1005742.03 1000203.15 1042919.27
00:10:39.662 ========================================================
00:10:39.662 Total                                                                    :     256.00       0.12 1004504.16 1000203.15 1042919.27
00:10:39.662
00:10:40.229 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3604518
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3604518) - No such process
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3604518
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:40.229 rmmod nvme_tcp
00:10:40.229 rmmod nvme_fabrics
00:10:40.229 rmmod nvme_keyring
00:10:40.229 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3603792 ']'
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3603792
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3603792 ']'
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3603792
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3603792
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3603792'
killing process with pid 3603792
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3603792
21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
3603792 00:10:40.230 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.230 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.230 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.230 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.230 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.230 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.230 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.230 21:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.809 21:48:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:42.809 00:10:42.809 real 0m15.907s 00:10:42.809 user 0m30.183s 00:10:42.809 sys 0m4.759s 00:10:42.809 21:48:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.809 21:48:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:42.809 ************************************ 00:10:42.809 END TEST nvmf_delete_subsystem 00:10:42.809 ************************************ 00:10:42.809 21:48:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:42.809 21:48:36 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:42.809 21:48:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:42.809 21:48:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.809 21:48:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.809 ************************************ 00:10:42.809 START TEST nvmf_ns_masking 00:10:42.809 ************************************ 00:10:42.809 21:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:42.809 * Looking for test storage... 
00:10:42.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.809 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2ae39734-ba50-4629-bd61-6a2486f42c57 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=bc15227f-297b-49c1-8459-8f4b370be03c 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c7d75a56-87ad-4e25-8c30-884b028b2f1f 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.810 21:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:48.079 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:48.079 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.079 
21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:48.079 Found net devices under 0000:86:00.0: cvl_0_0 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:48.079 Found net devices under 0000:86:00.1: cvl_0_1 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:48.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:10:48.079 00:10:48.079 --- 10.0.0.2 ping statistics --- 00:10:48.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.079 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:10:48.079 00:10:48.079 --- 10.0.0.1 ping statistics --- 00:10:48.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.079 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.079 21:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3608504 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3608504 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3608504 ']' 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
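# The network fixture above in a nutshell: one physical e810 port (cvl_0_0) is moved
# into a private network namespace to act as the target at 10.0.0.2, while its sibling
# port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so the two
# sides talk through the real NICs on a single machine. The same setup, condensed from
# the trace (run as root; interface names as detected above):
#
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
#   ip addr add 10.0.0.1/24 dev cvl_0_1
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#   ip netns exec cvl_0_0_ns_spdk ip link set lo up
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
#   ping -c 1 10.0.0.2                                  # initiator -> target
#   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
#
# nvmf_tgt is then launched inside the namespace (the "ip netns exec cvl_0_0_ns_spdk ...
# nvmf_tgt" invocation below), so its 10.0.0.2:4420 listener is only reachable over the
# cvl ports.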
00:10:48.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:48.080 21:48:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:48.080 [2024-07-15 21:48:41.651644] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:10:48.080 [2024-07-15 21:48:41.651685] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.080 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.080 [2024-07-15 21:48:41.709122] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.080 [2024-07-15 21:48:41.788798] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.080 [2024-07-15 21:48:41.788832] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.080 [2024-07-15 21:48:41.788840] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.080 [2024-07-15 21:48:41.788845] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.080 [2024-07-15 21:48:41.788850] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.080 [2024-07-15 21:48:41.788867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.337 21:48:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.337 21:48:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:48.337 21:48:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.337 21:48:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:48.337 21:48:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:48.337 21:48:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.337 21:48:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:48.594 [2024-07-15 21:48:42.637305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.594 21:48:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:48.594 21:48:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:48.594 21:48:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:48.594 Malloc1 00:10:48.852 21:48:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:48.852 Malloc2 00:10:48.852 21:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:10:49.110 21:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:49.385 21:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.385 [2024-07-15 21:48:43.529165] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.385 21:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:49.385 21:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7d75a56-87ad-4e25-8c30-884b028b2f1f -a 10.0.0.2 -s 4420 -i 4 00:10:49.643 21:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.643 21:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:49.643 21:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.643 21:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:49.643 21:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:51.540 21:48:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:51.540 21:48:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:51.540 21:48:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.540 21:48:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:51.540 21:48:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.540 21:48:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:51.540 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:51.540 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:51.799 [ 0]:0x1 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09790917b86a4812ac39c408154cd65e 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09790917b86a4812ac39c408154cd65e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:51.799 21:48:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc2 -n 2 00:10:51.799 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:51.799 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:51.799 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:51.799 [ 0]:0x1 00:10:51.799 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:51.799 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09790917b86a4812ac39c408154cd65e 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09790917b86a4812ac39c408154cd65e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:52.057 [ 1]:0x2 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f93ebe158a94b86857333c758d14d6b 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2f93ebe158a94b86857333c758d14d6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.057 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.315 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:52.315 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:52.315 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7d75a56-87ad-4e25-8c30-884b028b2f1f -a 10.0.0.2 -s 4420 -i 4 00:10:52.573 21:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:52.573 21:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:52.573 21:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.573 21:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:52.573 21:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:52.573 21:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:55.104 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:55.104 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:55.104 21:48:48 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.104 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:55.104 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.104 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:55.105 21:48:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:55.105 [ 0]:0x2 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f93ebe158a94b86857333c758d14d6b 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
2f93ebe158a94b86857333c758d14d6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:55.105 [ 0]:0x1 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09790917b86a4812ac39c408154cd65e 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09790917b86a4812ac39c408154cd65e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:55.105 [ 1]:0x2 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f93ebe158a94b86857333c758d14d6b 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2f93ebe158a94b86857333c758d14d6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:55.105 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:55.363 [ 0]:0x2 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f93ebe158a94b86857333c758d14d6b 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2f93ebe158a94b86857333c758d14d6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:55.363 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.620 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:55.620 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:55.620 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7d75a56-87ad-4e25-8c30-884b028b2f1f -a 10.0.0.2 -s 4420 -i 4 00:10:55.877 21:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:55.877 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:55.877 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.877 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:55.877 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:55.877 21:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:57.772 21:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:57.772 21:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:57.772 21:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.772 21:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:57.772 21:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.772 21:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
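Taken together, the trace up to this point is the core namespace-masking check: namespace 1 is re-added with --no-auto-visible, so the kernel initiator cannot see it (its NGUID reads back as all zeroes) until nvmf_ns_add_host grants nqn.2016-06.io.spdk:host1 access, while nvmf_ns_remove_host hides it again; namespace 2, added without the flag, stays visible throughout. A minimal sketch of that flow, reduced to only the commands that appear in the trace (rpc.py abbreviates the full scripts/rpc.py path, and the nvme connect UUID/queue flags are trimmed):

# Sketch of the masking flow exercised above (target at 10.0.0.2:4420).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
nvme list-ns /dev/nvme0    # ns 1 hidden: NGUID reads back as all zeroes
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme list-ns /dev/nvme0    # ns 1 now visible to host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1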
00:10:57.772 21:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:57.772 21:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:57.772 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:57.772 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:57.772 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:57.772 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:57.772 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:57.772 [ 0]:0x1 00:10:57.772 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:57.772 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09790917b86a4812ac39c408154cd65e 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09790917b86a4812ac39c408154cd65e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:58.030 [ 1]:0x2 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f93ebe158a94b86857333c758d14d6b 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2f93ebe158a94b86857333c758d14d6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:58.030 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:58.288 [ 0]:0x2 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f93ebe158a94b86857333c758d14d6b 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2f93ebe158a94b86857333c758d14d6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:58.288 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:58.546 [2024-07-15 21:48:52.530788] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:58.546 request: 00:10:58.546 { 00:10:58.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:58.546 "nsid": 2, 00:10:58.546 "host": "nqn.2016-06.io.spdk:host1", 00:10:58.546 "method": "nvmf_ns_remove_host", 00:10:58.546 "req_id": 1 00:10:58.546 } 00:10:58.546 Got JSON-RPC error response 00:10:58.546 response: 00:10:58.546 { 00:10:58.546 "code": -32602, 00:10:58.546 "message": "Invalid parameters" 00:10:58.546 } 00:10:58.546 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:58.546 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:58.546 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:58.546 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:58.547 [ 0]:0x2 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2f93ebe158a94b86857333c758d14d6b 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
2f93ebe158a94b86857333c758d14d6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:58.547 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.804 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3610506 00:10:58.804 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:58.804 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.804 21:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3610506 /var/tmp/host.sock 00:10:58.804 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3610506 ']' 00:10:58.804 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:58.804 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.805 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:58.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:58.805 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.805 21:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:58.805 [2024-07-15 21:48:52.885566] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
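After the kernel-initiator checks (including the expected -32602 "Invalid parameters" response when nvmf_ns_remove_host targets namespace 2, which was never created with --no-auto-visible and therefore cannot be masked), the test restarts the host side as a second SPDK application listening on /var/tmp/host.sock and repeats the masking checks over bdev_nvme. A condensed sketch of that phase, using only commands that appear in the trace (spdk_tgt abbreviates the full build/bin/spdk_tgt path; the script's hostrpc helper wraps rpc.py -s /var/tmp/host.sock):

# Host-side SPDK application, reduced from the trace.
spdk_tgt -r /var/tmp/host.sock -m 2 &
# host1 was granted ns 1 only; host2 ns 2 only.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # yields nvme0n1
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # yields nvme1n2
rpc.py -s /var/tmp/host.sock bdev_get_bdevs   # UUIDs must match the NGUIDs set via -g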
00:10:58.805 [2024-07-15 21:48:52.885613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610506 ] 00:10:58.805 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.805 [2024-07-15 21:48:52.940647] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.805 [2024-07-15 21:48:53.013486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.737 21:48:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.737 21:48:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:59.737 21:48:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.737 21:48:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.995 21:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2ae39734-ba50-4629-bd61-6a2486f42c57 00:10:59.995 21:48:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:59.995 21:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2AE39734BA504629BD616A2486F42C57 -i 00:10:59.995 21:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid bc15227f-297b-49c1-8459-8f4b370be03c 00:10:59.995 21:48:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:59.995 21:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BC15227F297B49C184598F4B370BE03C -i 00:11:00.252 21:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:00.509 21:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:00.509 21:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:00.509 21:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:00.767 nvme0n1 00:11:01.024 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:01.024 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:01.282 nvme1n2 00:11:01.282 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:01.282 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:01.282 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:01.282 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:01.282 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:01.540 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:01.540 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:01.540 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:01.540 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:01.540 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2ae39734-ba50-4629-bd61-6a2486f42c57 == \2\a\e\3\9\7\3\4\-\b\a\5\0\-\4\6\2\9\-\b\d\6\1\-\6\a\2\4\8\6\f\4\2\c\5\7 ]] 00:11:01.540 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:01.540 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:01.540 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ bc15227f-297b-49c1-8459-8f4b370be03c == \b\c\1\5\2\2\7\f\-\2\9\7\b\-\4\9\c\1\-\8\4\5\9\-\8\f\4\b\3\7\0\b\e\0\3\c ]] 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3610506 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3610506 ']' 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3610506 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3610506 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3610506' 00:11:01.799 killing process with pid 3610506 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3610506 00:11:01.799 21:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3610506 00:11:02.057 21:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:02.315 21:48:56 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.315 rmmod nvme_tcp 00:11:02.315 rmmod nvme_fabrics 00:11:02.315 rmmod nvme_keyring 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3608504 ']' 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3608504 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3608504 ']' 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3608504 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3608504 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3608504' 00:11:02.315 killing process with pid 3608504 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3608504 00:11:02.315 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3608504 00:11:02.573 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:02.573 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:02.573 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:02.573 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.573 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:02.573 21:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.573 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.573 21:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.154 21:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:05.154 00:11:05.154 real 0m22.221s 00:11:05.154 user 0m24.125s 00:11:05.154 sys 0m5.800s 00:11:05.154 21:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:05.154 21:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:05.154 ************************************ 00:11:05.154 END TEST nvmf_ns_masking 00:11:05.154 ************************************ 00:11:05.154 21:48:58 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:05.154 21:48:58 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:05.154 21:48:58 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:05.154 21:48:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:05.154 21:48:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.154 21:48:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:05.154 ************************************ 00:11:05.154 START TEST nvmf_nvme_cli 00:11:05.154 ************************************ 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:05.154 * Looking for test storage... 00:11:05.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:05.154 21:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:05.154 21:48:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:05.154 21:48:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:10.429 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:10.429 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:10.429 Found net devices under 0000:86:00.0: cvl_0_0 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:10.429 Found net devices under 0000:86:00.1: cvl_0_1 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.429 21:49:04 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:10.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:11:10.429 00:11:10.429 --- 10.0.0.2 ping statistics --- 00:11:10.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.429 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:11:10.429 00:11:10.429 --- 10.0.0.1 ping statistics --- 00:11:10.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.429 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:10.429 21:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3614740 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3614740 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3614740 ']' 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.430 21:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:10.689 [2024-07-15 21:49:04.703453] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
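The block above is the standard SPDK trick for faking a two-host NVMe/TCP setup on one machine: of the two E810 ports found earlier, cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; the two pings prove the path in both directions before any NVMe traffic flows. A minimal standalone sketch of the same topology, using the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk                         # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # first port becomes the "remote" target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns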
00:11:10.689 [2024-07-15 21:49:04.703497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.689 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.689 [2024-07-15 21:49:04.761440] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.689 [2024-07-15 21:49:04.835399] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.689 [2024-07-15 21:49:04.835439] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.689 [2024-07-15 21:49:04.835446] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.689 [2024-07-15 21:49:04.835453] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.689 [2024-07-15 21:49:04.835458] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.689 [2024-07-15 21:49:04.835567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.689 [2024-07-15 21:49:04.835667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.689 [2024-07-15 21:49:04.835755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.689 [2024-07-15 21:49:04.835756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:11.626 [2024-07-15 21:49:05.552146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:11.626 Malloc0 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:11.626 Malloc1 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.626 21:49:05 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:11.626 [2024-07-15 21:49:05.629587] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:11.626 00:11:11.626 Discovery Log Number of Records 2, Generation counter 2 00:11:11.626 =====Discovery Log Entry 0====== 00:11:11.626 trtype: tcp 00:11:11.626 adrfam: ipv4 00:11:11.626 subtype: current discovery subsystem 00:11:11.626 treq: not required 00:11:11.626 portid: 0 00:11:11.626 trsvcid: 4420 00:11:11.626 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:11.626 traddr: 10.0.0.2 00:11:11.626 eflags: explicit discovery connections, duplicate discovery information 00:11:11.626 sectype: none 00:11:11.626 =====Discovery Log Entry 1====== 00:11:11.626 trtype: tcp 00:11:11.626 adrfam: ipv4 00:11:11.626 subtype: nvme subsystem 00:11:11.626 treq: not required 00:11:11.626 portid: 0 00:11:11.626 trsvcid: 4420 00:11:11.626 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:11.626 traddr: 10.0.0.2 00:11:11.626 eflags: none 00:11:11.626 sectype: none 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- 
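Everything between nvmfappstart and the discovery listing above is plain JSON-RPC provisioning of a target that itself runs inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF). Condensed from the rpc_cmd calls in the trace, with rpc.py standing for scripts/rpc.py:

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # transport options exactly as nvme_cli.sh passes them
  rpc.py bdev_malloc_create 64 512 -b Malloc0          # two 64 MiB RAM bdevs, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The nvme discover output above reflects exactly this: two discovery log records, the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both on 10.0.0.2:4420.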
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:11.626 21:49:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.003 21:49:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:13.003 21:49:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.003 21:49:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.003 21:49:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:13.003 21:49:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:13.003 21:49:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:14.906 21:49:08 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:14.906 /dev/nvme0n1 ]] 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:14.906 21:49:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.907 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.907 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- 
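The host side is symmetrical: enumerate /dev/nvme* before connecting (zero devices), connect, wait until lsblk shows one block device per namespace, then disconnect and confirm the serial disappears; the stretch of trace that follows tears the rest down. Reduced to its essentials, with the hostnqn/hostid pair generated earlier in this run:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
  sleep 2
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2: one namespace per Malloc bdev
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # prints: disconnected 1 controller(s)
  # cleanup, as the next lines of the trace show: modprobe -v -r nvme-tcp (which also drops
  # nvme_fabrics and nvme_keyring), kill the nvmf_tgt pid, and flush the namespace addressing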
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:15.166 rmmod nvme_tcp 00:11:15.166 rmmod nvme_fabrics 00:11:15.166 rmmod nvme_keyring 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3614740 ']' 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3614740 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3614740 ']' 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3614740 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3614740 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3614740' 00:11:15.166 killing process with pid 3614740 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3614740 00:11:15.166 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3614740 00:11:15.426 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:15.426 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:15.426 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:15.426 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.426 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.426 21:49:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.426 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.426 21:49:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.334 21:49:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:17.593 00:11:17.593 real 0m12.702s 00:11:17.593 user 0m20.138s 00:11:17.593 sys 0m4.891s 00:11:17.593 21:49:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.593 21:49:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.593 ************************************ 00:11:17.593 END TEST nvmf_nvme_cli 00:11:17.593 ************************************ 00:11:17.593 21:49:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:17.593 21:49:11 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:17.593 21:49:11 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:17.593 21:49:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:17.593 21:49:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.593 21:49:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:17.593 ************************************ 00:11:17.593 START TEST nvmf_vfio_user 00:11:17.593 ************************************ 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:17.593 * Looking for test storage... 00:11:17.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.593 21:49:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
(PATH value and the re-exports at paths/export.sh@3-@6 trimmed: each prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the inherited PATH) 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:17.594
21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3616029 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3616029' 00:11:17.594 Process pid: 3616029 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3616029 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3616029 ']' 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.594 21:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:17.594 [2024-07-15 21:49:11.804987] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:11:17.594 [2024-07-15 21:49:11.805031] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.594 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.853 [2024-07-15 21:49:11.860814] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.853 [2024-07-15 21:49:11.933639] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.853 [2024-07-15 21:49:11.933680] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.853 [2024-07-15 21:49:11.933687] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.853 [2024-07-15 21:49:11.933693] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.853 [2024-07-15 21:49:11.933698] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
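For the vfio-user stage the target needs no namespace at all: it runs in the root namespace, pinned to four cores, with all tracepoint groups enabled, and its listeners will live under /var/run/vfio-user. A sketch of the launch seen at target/nvmf_vfio_user.sh@54-@60, with the workspace prefix shortened:

  rm -rf /var/run/vfio-user                        # clear stale socket directories from earlier runs
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!                                       # 3616029 in this run
  waitforlisten "$nvmfpid"                         # helper: poll until /var/tmp/spdk.sock answers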
00:11:17.853 [2024-07-15 21:49:11.933788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.853 [2024-07-15 21:49:11.933902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.853 [2024-07-15 21:49:11.933967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.853 [2024-07-15 21:49:11.933968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.421 21:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:18.421 21:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:18.421 21:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:19.797 21:49:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:19.797 21:49:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:19.797 21:49:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:19.797 21:49:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:19.797 21:49:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:19.797 21:49:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:19.797 Malloc1 00:11:19.797 21:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:20.056 21:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:20.314 21:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:20.573 21:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:20.573 21:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:20.573 21:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:20.573 Malloc2 00:11:20.573 21:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:20.832 21:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:21.089 21:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:21.089 21:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:21.090 21:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:21.350 21:49:15 
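Unlike the TCP leg, a vfio-user listener address is a filesystem directory that a local client maps as a PCI device, so provisioning is a per-device loop over mkdir plus the usual bdev/subsystem RPCs; the -s 0 service field is effectively a placeholder here. Condensed from the trace, rpc.py again standing for scripts/rpc.py:

  rpc.py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      rpc.py bdev_malloc_create 64 512 -b Malloc$i
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done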
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:21.350 21:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:21.350 21:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:21.350 21:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:21.350 [2024-07-15 21:49:15.357019] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:11:21.350 [2024-07-15 21:49:15.357054] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3616719 ] 00:11:21.350 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.350 [2024-07-15 21:49:15.387699] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:21.350 [2024-07-15 21:49:15.396548] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:21.350 [2024-07-15 21:49:15.396565] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd6a8c3e000 00:11:21.350 [2024-07-15 21:49:15.397550] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:21.350 [2024-07-15 21:49:15.398551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:21.350 [2024-07-15 21:49:15.399560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:21.350 [2024-07-15 21:49:15.400566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:21.350 [2024-07-15 21:49:15.401570] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:21.350 [2024-07-15 21:49:15.402573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:21.350 [2024-07-15 21:49:15.403581] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:21.350 [2024-07-15 21:49:15.404586] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:21.350 [2024-07-15 21:49:15.405595] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:21.350 [2024-07-15 21:49:15.405604] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd6a8c33000 00:11:21.350 [2024-07-15 21:49:15.406665] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:21.350 [2024-07-15 21:49:15.419993] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:21.350 [2024-07-15 21:49:15.420014] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:21.350 [2024-07-15 21:49:15.424706] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:21.350 [2024-07-15 21:49:15.424749] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:21.350 [2024-07-15 21:49:15.424821] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:21.350 [2024-07-15 21:49:15.424837] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:21.350 [2024-07-15 21:49:15.424842] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:21.350 [2024-07-15 21:49:15.425700] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:21.350 [2024-07-15 21:49:15.425710] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:21.350 [2024-07-15 21:49:15.425717] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:21.350 [2024-07-15 21:49:15.426706] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:21.350 [2024-07-15 21:49:15.426713] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:21.350 [2024-07-15 21:49:15.426720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:21.350 [2024-07-15 21:49:15.427712] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:21.350 [2024-07-15 21:49:15.427719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:21.350 [2024-07-15 21:49:15.428717] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:21.350 [2024-07-15 21:49:15.428726] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:21.350 [2024-07-15 21:49:15.428730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:21.350 [2024-07-15 21:49:15.428736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:21.350 [2024-07-15 21:49:15.428841] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:21.350 [2024-07-15 21:49:15.428845] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:21.350 [2024-07-15 21:49:15.428850] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:21.350 [2024-07-15 21:49:15.429721] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:21.350 [2024-07-15 21:49:15.430731] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:21.350 [2024-07-15 21:49:15.431737] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:21.350 [2024-07-15 21:49:15.432738] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:21.350 [2024-07-15 21:49:15.432811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:21.350 [2024-07-15 21:49:15.433747] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:21.350 [2024-07-15 21:49:15.433754] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:21.350 [2024-07-15 21:49:15.433759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:21.350 [2024-07-15 21:49:15.433775] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:21.350 [2024-07-15 21:49:15.433782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:21.350 [2024-07-15 21:49:15.433799] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:21.350 [2024-07-15 21:49:15.433804] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:21.350 [2024-07-15 21:49:15.433816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:21.350 [2024-07-15 21:49:15.433863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:21.350 [2024-07-15 21:49:15.433871] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:21.350 [2024-07-15 21:49:15.433876] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:21.350 [2024-07-15 21:49:15.433880] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:21.350 [2024-07-15 21:49:15.433883] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:21.350 [2024-07-15 21:49:15.433891] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:21.350 [2024-07-15 21:49:15.433895] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:21.350 [2024-07-15 21:49:15.433899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:21.350 [2024-07-15 21:49:15.433906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:21.350 [2024-07-15 21:49:15.433915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:21.350 [2024-07-15 21:49:15.433929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:21.350 [2024-07-15 21:49:15.433940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.350 [2024-07-15 21:49:15.433948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.351 [2024-07-15 21:49:15.433955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.351 [2024-07-15 21:49:15.433962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.351 [2024-07-15 21:49:15.433967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.433975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.433983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.433991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.433996] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:21.351 [2024-07-15 21:49:15.434000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434084] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434098] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:21.351 [2024-07-15 21:49:15.434102] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:21.351 [2024-07-15 21:49:15.434108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434132] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:21.351 [2024-07-15 21:49:15.434142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434155] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:21.351 [2024-07-15 21:49:15.434159] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:21.351 [2024-07-15 21:49:15.434164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434198] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434204] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:21.351 [2024-07-15 21:49:15.434208] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:21.351 [2024-07-15 21:49:15.434214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
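The debug trace above is a compact tour of NVMe controller initialization over vfio-user: the driver maps the BARs, reads VS (offset 0x8) and CAP (offset 0x0), finds CC.EN=0 with CSTS.RDY=0, programs AQA/ASQ/ACQ (offsets 0x24/0x28/0x30), writes CC.EN=1 (offset 0x14, value 0x460001), polls CSTS (offset 0x1c) until RDY=1, and then walks the admin-command ladder: IDENTIFY controller, SET FEATURES for AER configuration and queue count, IDENTIFY active namespaces, and per-namespace IDENTIFY. All of it came from the single identify invocation at target/nvmf_vfio_user.sh@83; the -L flags select the nvme, nvme_vfio and vfio_pci debug log layers seen here (workspace prefix shortened):

  build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci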
00:11:21.351 [2024-07-15 21:49:15.434247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434268] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:21.351 [2024-07-15 21:49:15.434272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:21.351 [2024-07-15 21:49:15.434277] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:21.351 [2024-07-15 21:49:15.434292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434373] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:21.351 [2024-07-15 21:49:15.434378] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:21.351 [2024-07-15 21:49:15.434381] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:21.351 [2024-07-15 21:49:15.434384] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:21.351 [2024-07-15 21:49:15.434389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:21.351 [2024-07-15 21:49:15.434396] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:21.351 
[2024-07-15 21:49:15.434399] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:21.351 [2024-07-15 21:49:15.434405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434411] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:21.351 [2024-07-15 21:49:15.434415] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:21.351 [2024-07-15 21:49:15.434420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434427] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:21.351 [2024-07-15 21:49:15.434430] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:21.351 [2024-07-15 21:49:15.434436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:21.351 [2024-07-15 21:49:15.434442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:21.351 [2024-07-15 21:49:15.434470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:21.351 ===================================================== 00:11:21.351 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:21.351 ===================================================== 00:11:21.351 Controller Capabilities/Features 00:11:21.351 ================================ 00:11:21.351 Vendor ID: 4e58 00:11:21.351 Subsystem Vendor ID: 4e58 00:11:21.351 Serial Number: SPDK1 00:11:21.351 Model Number: SPDK bdev Controller 00:11:21.351 Firmware Version: 24.09 00:11:21.351 Recommended Arb Burst: 6 00:11:21.351 IEEE OUI Identifier: 8d 6b 50 00:11:21.351 Multi-path I/O 00:11:21.351 May have multiple subsystem ports: Yes 00:11:21.351 May have multiple controllers: Yes 00:11:21.351 Associated with SR-IOV VF: No 00:11:21.351 Max Data Transfer Size: 131072 00:11:21.351 Max Number of Namespaces: 32 00:11:21.351 Max Number of I/O Queues: 127 00:11:21.351 NVMe Specification Version (VS): 1.3 00:11:21.351 NVMe Specification Version (Identify): 1.3 00:11:21.351 Maximum Queue Entries: 256 00:11:21.351 Contiguous Queues Required: Yes 00:11:21.351 Arbitration Mechanisms Supported 00:11:21.351 Weighted Round Robin: Not Supported 00:11:21.351 Vendor Specific: Not Supported 00:11:21.351 Reset Timeout: 15000 ms 00:11:21.351 Doorbell Stride: 4 bytes 00:11:21.351 NVM Subsystem Reset: Not Supported 00:11:21.351 Command Sets Supported 00:11:21.351 NVM Command Set: Supported 00:11:21.351 Boot Partition: Not Supported 00:11:21.351 Memory Page Size Minimum: 4096 bytes 00:11:21.351 Memory Page Size Maximum: 4096 bytes 00:11:21.351 Persistent Memory Region: Not Supported 
00:11:21.351 Optional Asynchronous Events Supported 00:11:21.351 Namespace Attribute Notices: Supported 00:11:21.351 Firmware Activation Notices: Not Supported 00:11:21.351 ANA Change Notices: Not Supported 00:11:21.351 PLE Aggregate Log Change Notices: Not Supported 00:11:21.351 LBA Status Info Alert Notices: Not Supported 00:11:21.351 EGE Aggregate Log Change Notices: Not Supported 00:11:21.351 Normal NVM Subsystem Shutdown event: Not Supported 00:11:21.351 Zone Descriptor Change Notices: Not Supported 00:11:21.351 Discovery Log Change Notices: Not Supported 00:11:21.351 Controller Attributes 00:11:21.351 128-bit Host Identifier: Supported 00:11:21.351 Non-Operational Permissive Mode: Not Supported 00:11:21.351 NVM Sets: Not Supported 00:11:21.351 Read Recovery Levels: Not Supported 00:11:21.351 Endurance Groups: Not Supported 00:11:21.351 Predictable Latency Mode: Not Supported 00:11:21.351 Traffic Based Keep ALive: Not Supported 00:11:21.351 Namespace Granularity: Not Supported 00:11:21.351 SQ Associations: Not Supported 00:11:21.351 UUID List: Not Supported 00:11:21.351 Multi-Domain Subsystem: Not Supported 00:11:21.351 Fixed Capacity Management: Not Supported 00:11:21.351 Variable Capacity Management: Not Supported 00:11:21.351 Delete Endurance Group: Not Supported 00:11:21.351 Delete NVM Set: Not Supported 00:11:21.351 Extended LBA Formats Supported: Not Supported 00:11:21.351 Flexible Data Placement Supported: Not Supported 00:11:21.351 00:11:21.351 Controller Memory Buffer Support 00:11:21.351 ================================ 00:11:21.351 Supported: No 00:11:21.351 00:11:21.351 Persistent Memory Region Support 00:11:21.351 ================================ 00:11:21.351 Supported: No 00:11:21.351 00:11:21.351 Admin Command Set Attributes 00:11:21.351 ============================ 00:11:21.351 Security Send/Receive: Not Supported 00:11:21.351 Format NVM: Not Supported 00:11:21.351 Firmware Activate/Download: Not Supported 00:11:21.351 Namespace Management: Not Supported 00:11:21.351 Device Self-Test: Not Supported 00:11:21.351 Directives: Not Supported 00:11:21.351 NVMe-MI: Not Supported 00:11:21.351 Virtualization Management: Not Supported 00:11:21.351 Doorbell Buffer Config: Not Supported 00:11:21.351 Get LBA Status Capability: Not Supported 00:11:21.351 Command & Feature Lockdown Capability: Not Supported 00:11:21.351 Abort Command Limit: 4 00:11:21.351 Async Event Request Limit: 4 00:11:21.351 Number of Firmware Slots: N/A 00:11:21.351 Firmware Slot 1 Read-Only: N/A 00:11:21.351 Firmware Activation Without Reset: N/A 00:11:21.351 Multiple Update Detection Support: N/A 00:11:21.351 Firmware Update Granularity: No Information Provided 00:11:21.351 Per-Namespace SMART Log: No 00:11:21.351 Asymmetric Namespace Access Log Page: Not Supported 00:11:21.351 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:21.351 Command Effects Log Page: Supported 00:11:21.351 Get Log Page Extended Data: Supported 00:11:21.351 Telemetry Log Pages: Not Supported 00:11:21.351 Persistent Event Log Pages: Not Supported 00:11:21.351 Supported Log Pages Log Page: May Support 00:11:21.351 Commands Supported & Effects Log Page: Not Supported 00:11:21.351 Feature Identifiers & Effects Log Page:May Support 00:11:21.351 NVMe-MI Commands & Effects Log Page: May Support 00:11:21.351 Data Area 4 for Telemetry Log: Not Supported 00:11:21.351 Error Log Page Entries Supported: 128 00:11:21.351 Keep Alive: Supported 00:11:21.351 Keep Alive Granularity: 10000 ms 00:11:21.351 00:11:21.351 NVM Command Set Attributes 
00:11:21.351 ========================== 00:11:21.351 Submission Queue Entry Size 00:11:21.351 Max: 64 00:11:21.351 Min: 64 00:11:21.351 Completion Queue Entry Size 00:11:21.351 Max: 16 00:11:21.351 Min: 16 00:11:21.351 Number of Namespaces: 32 00:11:21.351 Compare Command: Supported 00:11:21.351 Write Uncorrectable Command: Not Supported 00:11:21.351 Dataset Management Command: Supported 00:11:21.351 Write Zeroes Command: Supported 00:11:21.351 Set Features Save Field: Not Supported 00:11:21.351 Reservations: Not Supported 00:11:21.351 Timestamp: Not Supported 00:11:21.351 Copy: Supported 00:11:21.351 Volatile Write Cache: Present 00:11:21.351 Atomic Write Unit (Normal): 1 00:11:21.351 Atomic Write Unit (PFail): 1 00:11:21.351 Atomic Compare & Write Unit: 1 00:11:21.351 Fused Compare & Write: Supported 00:11:21.351 Scatter-Gather List 00:11:21.351 SGL Command Set: Supported (Dword aligned) 00:11:21.351 SGL Keyed: Not Supported 00:11:21.351 SGL Bit Bucket Descriptor: Not Supported 00:11:21.351 SGL Metadata Pointer: Not Supported 00:11:21.351 Oversized SGL: Not Supported 00:11:21.351 SGL Metadata Address: Not Supported 00:11:21.351 SGL Offset: Not Supported 00:11:21.351 Transport SGL Data Block: Not Supported 00:11:21.351 Replay Protected Memory Block: Not Supported 00:11:21.351 00:11:21.351 Firmware Slot Information 00:11:21.352 ========================= 00:11:21.352 Active slot: 1 00:11:21.352 Slot 1 Firmware Revision: 24.09 00:11:21.352 00:11:21.352 00:11:21.352 Commands Supported and Effects 00:11:21.352 ============================== 00:11:21.352 Admin Commands 00:11:21.352 -------------- 00:11:21.352 Get Log Page (02h): Supported 00:11:21.352 Identify (06h): Supported 00:11:21.352 Abort (08h): Supported 00:11:21.352 Set Features (09h): Supported 00:11:21.352 Get Features (0Ah): Supported 00:11:21.352 Asynchronous Event Request (0Ch): Supported 00:11:21.352 Keep Alive (18h): Supported 00:11:21.352 I/O Commands 00:11:21.352 ------------ 00:11:21.352 Flush (00h): Supported LBA-Change 00:11:21.352 Write (01h): Supported LBA-Change 00:11:21.352 Read (02h): Supported 00:11:21.352 Compare (05h): Supported 00:11:21.352 Write Zeroes (08h): Supported LBA-Change 00:11:21.352 Dataset Management (09h): Supported LBA-Change 00:11:21.352 Copy (19h): Supported LBA-Change 00:11:21.352 00:11:21.352 Error Log 00:11:21.352 ========= 00:11:21.352 00:11:21.352 Arbitration 00:11:21.352 =========== 00:11:21.352 Arbitration Burst: 1 00:11:21.352 00:11:21.352 Power Management 00:11:21.352 ================ 00:11:21.352 Number of Power States: 1 00:11:21.352 Current Power State: Power State #0 00:11:21.352 Power State #0: 00:11:21.352 Max Power: 0.00 W 00:11:21.352 Non-Operational State: Operational 00:11:21.352 Entry Latency: Not Reported 00:11:21.352 Exit Latency: Not Reported 00:11:21.352 Relative Read Throughput: 0 00:11:21.352 Relative Read Latency: 0 00:11:21.352 Relative Write Throughput: 0 00:11:21.352 Relative Write Latency: 0 00:11:21.352 Idle Power: Not Reported 00:11:21.352 Active Power: Not Reported 00:11:21.352 Non-Operational Permissive Mode: Not Supported 00:11:21.352 00:11:21.352 Health Information 00:11:21.352 ================== 00:11:21.352 Critical Warnings: 00:11:21.352 Available Spare Space: OK 00:11:21.352 Temperature: OK 00:11:21.352 Device Reliability: OK 00:11:21.352 Read Only: No 00:11:21.352 Volatile Memory Backup: OK 00:11:21.352 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:21.352 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:21.352 Available Spare: 0% 00:11:21.352 
[2024-07-15 21:49:15.434558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:21.352 [2024-07-15 21:49:15.434565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:21.352 [2024-07-15 21:49:15.434589] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:21.352 [2024-07-15 21:49:15.434598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.352 [2024-07-15 21:49:15.434605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.352 [2024-07-15 21:49:15.434611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.352 [2024-07-15 21:49:15.434616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.352 [2024-07-15 21:49:15.438231] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:21.352 [2024-07-15 21:49:15.438241] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:21.352 [2024-07-15 21:49:15.438782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:21.352 [2024-07-15 21:49:15.438827] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:21.352 [2024-07-15 21:49:15.438833] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:21.352 [2024-07-15 21:49:15.439794] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:21.352 [2024-07-15 21:49:15.439803] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:21.352 [2024-07-15 21:49:15.439851] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:21.352 [2024-07-15 21:49:15.441829] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:21.352 Available Spare Threshold: 0% 00:11:21.352 Life Percentage Used: 0% 00:11:21.352 Data Units Read: 0 00:11:21.352 Data Units Written: 0 00:11:21.352 Host Read Commands: 0 00:11:21.352 Host Write Commands: 0 00:11:21.352 Controller Busy Time: 0 minutes 00:11:21.352 Power Cycles: 0 00:11:21.352 Power On Hours: 0 hours 00:11:21.352 Unsafe Shutdowns: 0 00:11:21.352 Unrecoverable Media Errors: 0 00:11:21.352 Lifetime Error Log Entries: 0 00:11:21.352 Warning Temperature Time: 0 minutes 00:11:21.352 Critical Temperature Time: 0 minutes 00:11:21.352 00:11:21.352 Number of Queues 00:11:21.352 ================ 00:11:21.352 Number of I/O Submission Queues: 127 00:11:21.352 Number of I/O Completion Queues: 127 00:11:21.352 00:11:21.352 Active Namespaces 00:11:21.352 ================= 00:11:21.352 Namespace ID:1 00:11:21.352 Error Recovery Timeout: Unlimited 00:11:21.352 Command 
Set Identifier: NVM (00h) 00:11:21.352 Deallocate: Supported 00:11:21.352 Deallocated/Unwritten Error: Not Supported 00:11:21.352 Deallocated Read Value: Unknown 00:11:21.352 Deallocate in Write Zeroes: Not Supported 00:11:21.352 Deallocated Guard Field: 0xFFFF 00:11:21.352 Flush: Supported 00:11:21.352 Reservation: Supported 00:11:21.352 Namespace Sharing Capabilities: Multiple Controllers 00:11:21.352 Size (in LBAs): 131072 (0GiB) 00:11:21.352 Capacity (in LBAs): 131072 (0GiB) 00:11:21.352 Utilization (in LBAs): 131072 (0GiB) 00:11:21.352 NGUID: 9BB62CD82BA14884BA79DB8F5977861E 00:11:21.352 UUID: 9bb62cd8-2ba1-4884-ba79-db8f5977861e 00:11:21.352 Thin Provisioning: Not Supported 00:11:21.352 Per-NS Atomic Units: Yes 00:11:21.352 Atomic Boundary Size (Normal): 0 00:11:21.352 Atomic Boundary Size (PFail): 0 00:11:21.352 Atomic Boundary Offset: 0 00:11:21.352 Maximum Single Source Range Length: 65535 00:11:21.352 Maximum Copy Length: 65535 00:11:21.352 Maximum Source Range Count: 1 00:11:21.352 NGUID/EUI64 Never Reused: No 00:11:21.352 Namespace Write Protected: No 00:11:21.352 Number of LBA Formats: 1 00:11:21.352 Current LBA Format: LBA Format #00 00:11:21.352 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:21.352 00:11:21.352 21:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:21.352 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.610 [2024-07-15 21:49:15.671035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:26.884 Initializing NVMe Controllers 00:11:26.884 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:26.884 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:26.884 Initialization complete. Launching workers. 00:11:26.884 ======================================================== 00:11:26.884 Latency(us) 00:11:26.884 Device Information : IOPS MiB/s Average min max 00:11:26.884 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39917.70 155.93 3206.42 966.29 7573.66 00:11:26.884 ======================================================== 00:11:26.884 Total : 39917.70 155.93 3206.42 966.29 7573.66 00:11:26.884 00:11:26.884 [2024-07-15 21:49:20.690171] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:26.884 21:49:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:26.884 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.884 [2024-07-15 21:49:20.919227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:32.202 Initializing NVMe Controllers 00:11:32.202 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:32.202 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:32.202 Initialization complete. Launching workers. 
00:11:32.202 ======================================================== 00:11:32.202 Latency(us) 00:11:32.202 Device Information : IOPS MiB/s Average min max 00:11:32.202 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.68 62.65 7992.41 4983.96 15959.78 00:11:32.202 ======================================================== 00:11:32.202 Total : 16038.68 62.65 7992.41 4983.96 15959.78 00:11:32.202 00:11:32.202 [2024-07-15 21:49:25.972675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:32.202 21:49:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:32.202 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.202 [2024-07-15 21:49:26.170628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:37.472 [2024-07-15 21:49:31.238455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:37.472 Initializing NVMe Controllers 00:11:37.472 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:37.472 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:37.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:37.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:37.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:37.472 Initialization complete. Launching workers. 00:11:37.472 Starting thread on core 2 00:11:37.472 Starting thread on core 3 00:11:37.472 Starting thread on core 1 00:11:37.472 21:49:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:37.472 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.472 [2024-07-15 21:49:31.522393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:40.775 [2024-07-15 21:49:34.582268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:40.776 Initializing NVMe Controllers 00:11:40.776 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:40.776 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:40.776 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:40.776 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:40.776 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:40.776 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:40.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:40.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:40.776 Initialization complete. Launching workers. 
00:11:40.776 Starting thread on core 1 with urgent priority queue 00:11:40.776 Starting thread on core 2 with urgent priority queue 00:11:40.776 Starting thread on core 3 with urgent priority queue 00:11:40.776 Starting thread on core 0 with urgent priority queue 00:11:40.776 SPDK bdev Controller (SPDK1 ) core 0: 4986.67 IO/s 20.05 secs/100000 ios 00:11:40.776 SPDK bdev Controller (SPDK1 ) core 1: 5333.67 IO/s 18.75 secs/100000 ios 00:11:40.776 SPDK bdev Controller (SPDK1 ) core 2: 5156.00 IO/s 19.39 secs/100000 ios 00:11:40.776 SPDK bdev Controller (SPDK1 ) core 3: 7119.67 IO/s 14.05 secs/100000 ios 00:11:40.776 ======================================================== 00:11:40.776 00:11:40.776 21:49:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:40.776 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.776 [2024-07-15 21:49:34.861829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:40.776 Initializing NVMe Controllers 00:11:40.776 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:40.776 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:40.776 Namespace ID: 1 size: 0GB 00:11:40.776 Initialization complete. 00:11:40.776 INFO: using host memory buffer for IO 00:11:40.776 Hello world! 00:11:40.776 [2024-07-15 21:49:34.903103] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:40.776 21:49:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:40.776 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.035 [2024-07-15 21:49:35.172747] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:41.972 Initializing NVMe Controllers 00:11:41.972 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:41.972 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:41.972 Initialization complete. Launching workers. 
00:11:41.972 submit (in ns) avg, min, max = 8675.2, 3259.1, 3999594.8 00:11:41.972 complete (in ns) avg, min, max = 19528.0, 1810.4, 5991625.2 00:11:41.972 00:11:41.972 Submit histogram 00:11:41.972 ================ 00:11:41.972 Range in us Cumulative Count 00:11:41.972 3.256 - 3.270: 0.0062% ( 1) 00:11:41.972 3.270 - 3.283: 0.0247% ( 3) 00:11:41.972 3.283 - 3.297: 0.0740% ( 8) 00:11:41.972 3.297 - 3.311: 0.2900% ( 35) 00:11:41.972 3.311 - 3.325: 0.5985% ( 50) 00:11:41.972 3.325 - 3.339: 1.2773% ( 110) 00:11:41.972 3.339 - 3.353: 3.4740% ( 356) 00:11:41.972 3.353 - 3.367: 8.1390% ( 756) 00:11:41.972 3.367 - 3.381: 13.5135% ( 871) 00:11:41.972 3.381 - 3.395: 20.1098% ( 1069) 00:11:41.972 3.395 - 3.409: 26.4285% ( 1024) 00:11:41.972 3.409 - 3.423: 32.1671% ( 930) 00:11:41.972 3.423 - 3.437: 37.3442% ( 839) 00:11:41.972 3.437 - 3.450: 42.9779% ( 913) 00:11:41.972 3.450 - 3.464: 47.7231% ( 769) 00:11:41.972 3.464 - 3.478: 52.3140% ( 744) 00:11:41.972 3.478 - 3.492: 57.0961% ( 775) 00:11:41.972 3.492 - 3.506: 63.9455% ( 1110) 00:11:41.972 3.506 - 3.520: 69.3447% ( 875) 00:11:41.972 3.520 - 3.534: 73.2383% ( 631) 00:11:41.972 3.534 - 3.548: 78.1562% ( 797) 00:11:41.972 3.548 - 3.562: 81.9203% ( 610) 00:11:41.972 3.562 - 3.590: 85.9373% ( 651) 00:11:41.972 3.590 - 3.617: 87.0665% ( 183) 00:11:41.972 3.617 - 3.645: 87.9119% ( 137) 00:11:41.972 3.645 - 3.673: 89.2447% ( 216) 00:11:41.972 3.673 - 3.701: 91.3242% ( 337) 00:11:41.972 3.701 - 3.729: 93.0149% ( 274) 00:11:41.972 3.729 - 3.757: 94.7489% ( 281) 00:11:41.972 3.757 - 3.784: 96.2853% ( 249) 00:11:41.972 3.784 - 3.812: 97.7231% ( 233) 00:11:41.972 3.812 - 3.840: 98.7412% ( 165) 00:11:41.972 3.840 - 3.868: 99.1670% ( 69) 00:11:41.972 3.868 - 3.896: 99.4385% ( 44) 00:11:41.972 3.896 - 3.923: 99.5742% ( 22) 00:11:41.972 3.923 - 3.951: 99.5927% ( 3) 00:11:41.972 3.951 - 3.979: 99.6113% ( 3) 00:11:41.972 4.174 - 4.202: 99.6174% ( 1) 00:11:41.972 4.591 - 4.619: 99.6236% ( 1) 00:11:41.972 5.064 - 5.092: 99.6298% ( 1) 00:11:41.972 5.315 - 5.343: 99.6359% ( 1) 00:11:41.972 5.510 - 5.537: 99.6421% ( 1) 00:11:41.972 5.565 - 5.593: 99.6606% ( 3) 00:11:41.972 5.677 - 5.704: 99.6668% ( 1) 00:11:41.972 5.843 - 5.871: 99.6730% ( 1) 00:11:41.972 5.899 - 5.927: 99.6853% ( 2) 00:11:41.972 5.927 - 5.955: 99.6915% ( 1) 00:11:41.972 6.094 - 6.122: 99.7038% ( 2) 00:11:41.972 6.177 - 6.205: 99.7100% ( 1) 00:11:41.972 6.261 - 6.289: 99.7162% ( 1) 00:11:41.972 6.317 - 6.344: 99.7223% ( 1) 00:11:41.972 6.400 - 6.428: 99.7285% ( 1) 00:11:41.972 6.428 - 6.456: 99.7347% ( 1) 00:11:41.972 6.456 - 6.483: 99.7408% ( 1) 00:11:41.972 6.511 - 6.539: 99.7532% ( 2) 00:11:41.972 6.650 - 6.678: 99.7593% ( 1) 00:11:41.972 6.762 - 6.790: 99.7655% ( 1) 00:11:41.972 6.873 - 6.901: 99.7717% ( 1) 00:11:41.972 6.929 - 6.957: 99.7779% ( 1) 00:11:41.972 7.068 - 7.096: 99.7840% ( 1) 00:11:41.972 7.123 - 7.179: 99.7902% ( 1) 00:11:41.972 7.235 - 7.290: 99.8025% ( 2) 00:11:41.972 7.290 - 7.346: 99.8211% ( 3) 00:11:41.972 7.346 - 7.402: 99.8272% ( 1) 00:11:41.972 7.402 - 7.457: 99.8396% ( 2) 00:11:41.972 7.457 - 7.513: 99.8457% ( 1) 00:11:41.972 7.513 - 7.569: 99.8519% ( 1) 00:11:41.972 9.405 - 9.461: 99.8581% ( 1) 00:11:41.972 11.798 - 11.854: 99.8642% ( 1) 00:11:41.972 11.910 - 11.965: 99.8704% ( 1) 00:11:41.972 3989.148 - 4017.642: 100.0000% ( 21) 00:11:41.972 00:11:41.972 Complete histogram 00:11:41.972 ================== 00:11:41.972 Range in us Cumulative Count 00:11:41.972 1.809 - 1.823: 0.1604% ( 26) 00:11:41.972 1.823 - 1.837: 1.9622% ( 292) 00:11:41.972 1.837 - 1.850: 3.6283% ( 270) 
00:11:41.972 1.850 - 1.864: 4.5724% ( 153) 00:11:41.972 1.864 - 1.878: 9.1818% ( 747) 00:11:41.972 1.878 - 1.892: 54.4613% ( 7338) 00:11:41.972 [2024-07-15 21:49:36.188808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:42.231 1.892 - 1.906: 89.6211% ( 5698) 00:11:42.231 1.906 - 1.920: 95.0882% ( 886) 00:11:42.231 1.920 - 1.934: 96.8592% ( 287) 00:11:42.231 1.934 - 1.948: 97.4516% ( 96) 00:11:42.231 1.948 - 1.962: 98.1056% ( 106) 00:11:42.231 1.962 - 1.976: 98.8029% ( 113) 00:11:42.231 1.976 - 1.990: 99.2410% ( 71) 00:11:42.231 1.990 - 2.003: 99.3644% ( 20) 00:11:42.231 2.003 - 2.017: 99.3891% ( 4) 00:11:42.231 2.017 - 2.031: 99.4015% ( 2) 00:11:42.231 2.045 - 2.059: 99.4076% ( 1) 00:11:42.231 2.059 - 2.073: 99.4200% ( 2) 00:11:42.231 2.115 - 2.129: 99.4323% ( 2) 00:11:42.231 2.254 - 2.268: 99.4385% ( 1) 00:11:42.232 2.268 - 2.282: 99.4447% ( 1) 00:11:42.232 2.351 - 2.365: 99.4570% ( 2) 00:11:42.232 2.449 - 2.463: 99.4632% ( 1) 00:11:42.232 2.477 - 2.490: 99.4693% ( 1) 00:11:42.232 4.090 - 4.118: 99.4755% ( 1) 00:11:42.232 4.230 - 4.257: 99.4817% ( 1) 00:11:42.232 4.452 - 4.480: 99.4878% ( 1) 00:11:42.232 4.647 - 4.675: 99.4940% ( 1) 00:11:42.232 4.842 - 4.870: 99.5002% ( 1) 00:11:42.232 4.897 - 4.925: 99.5064% ( 1) 00:11:42.232 4.981 - 5.009: 99.5125% ( 1) 00:11:42.232 5.064 - 5.092: 99.5187% ( 1) 00:11:42.232 5.370 - 5.398: 99.5249% ( 1) 00:11:42.232 5.510 - 5.537: 99.5310% ( 1) 00:11:42.232 6.094 - 6.122: 99.5372% ( 1) 00:11:42.232 6.567 - 6.595: 99.5434% ( 1) 00:11:42.232 8.348 - 8.403: 99.5495% ( 1) 00:11:42.232 39.847 - 40.070: 99.5557% ( 1) 00:11:42.232 155.826 - 156.717: 99.5619% ( 1) 00:11:42.232 3989.148 - 4017.642: 99.9938% ( 70) 00:11:42.232 5983.722 - 6012.216: 100.0000% ( 1) 00:11:42.232 00 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:42.232 [ 00:11:42.232 { 00:11:42.232 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:42.232 "subtype": "Discovery", 00:11:42.232 "listen_addresses": [], 00:11:42.232 "allow_any_host": true, 00:11:42.232 "hosts": [] 00:11:42.232 }, 00:11:42.232 { 00:11:42.232 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:42.232 "subtype": "NVMe", 00:11:42.232 "listen_addresses": [ 00:11:42.232 { 00:11:42.232 "trtype": "VFIOUSER", 00:11:42.232 "adrfam": "IPv4", 00:11:42.232 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:42.232 "trsvcid": "0" 00:11:42.232 } 00:11:42.232 ], 00:11:42.232 "allow_any_host": true, 00:11:42.232 "hosts": [], 00:11:42.232 "serial_number": "SPDK1", 00:11:42.232 "model_number": "SPDK bdev Controller", 00:11:42.232 "max_namespaces": 32, 00:11:42.232 "min_cntlid": 1, 00:11:42.232 "max_cntlid": 65519, 00:11:42.232 "namespaces": [ 00:11:42.232 { 00:11:42.232 "nsid": 1, 00:11:42.232 "bdev_name": "Malloc1", 00:11:42.232 "name": "Malloc1", 00:11:42.232 "nguid": "9BB62CD82BA14884BA79DB8F5977861E", 00:11:42.232 "uuid": 
"9bb62cd8-2ba1-4884-ba79-db8f5977861e" 00:11:42.232 } 00:11:42.232 ] 00:11:42.232 }, 00:11:42.232 { 00:11:42.232 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:42.232 "subtype": "NVMe", 00:11:42.232 "listen_addresses": [ 00:11:42.232 { 00:11:42.232 "trtype": "VFIOUSER", 00:11:42.232 "adrfam": "IPv4", 00:11:42.232 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:42.232 "trsvcid": "0" 00:11:42.232 } 00:11:42.232 ], 00:11:42.232 "allow_any_host": true, 00:11:42.232 "hosts": [], 00:11:42.232 "serial_number": "SPDK2", 00:11:42.232 "model_number": "SPDK bdev Controller", 00:11:42.232 "max_namespaces": 32, 00:11:42.232 "min_cntlid": 1, 00:11:42.232 "max_cntlid": 65519, 00:11:42.232 "namespaces": [ 00:11:42.232 { 00:11:42.232 "nsid": 1, 00:11:42.232 "bdev_name": "Malloc2", 00:11:42.232 "name": "Malloc2", 00:11:42.232 "nguid": "97E4EF2446F34DF3BFC6E2D1BE1D10FE", 00:11:42.232 "uuid": "97e4ef24-46f3-4df3-bfc6-e2d1be1d10fe" 00:11:42.232 } 00:11:42.232 ] 00:11:42.232 } 00:11:42.232 ] 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3620187 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:42.232 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:42.232 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.490 [2024-07-15 21:49:36.566693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:42.490 Malloc3 00:11:42.490 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:42.749 [2024-07-15 21:49:36.785388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:42.749 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:42.749 Asynchronous Event Request test 00:11:42.749 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:42.749 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:42.749 Registering asynchronous event callbacks... 00:11:42.749 Starting namespace attribute notice tests for all controllers... 
00:11:42.749 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:42.749 aer_cb - Changed Namespace 00:11:42.750 Cleaning up... 00:11:42.750 [ 00:11:42.750 { 00:11:42.750 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:42.750 "subtype": "Discovery", 00:11:42.750 "listen_addresses": [], 00:11:42.750 "allow_any_host": true, 00:11:42.750 "hosts": [] 00:11:42.750 }, 00:11:42.750 { 00:11:42.750 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:42.750 "subtype": "NVMe", 00:11:42.750 "listen_addresses": [ 00:11:42.750 { 00:11:42.750 "trtype": "VFIOUSER", 00:11:42.750 "adrfam": "IPv4", 00:11:42.750 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:42.750 "trsvcid": "0" 00:11:42.750 } 00:11:42.750 ], 00:11:42.750 "allow_any_host": true, 00:11:42.750 "hosts": [], 00:11:42.750 "serial_number": "SPDK1", 00:11:42.750 "model_number": "SPDK bdev Controller", 00:11:42.750 "max_namespaces": 32, 00:11:42.750 "min_cntlid": 1, 00:11:42.750 "max_cntlid": 65519, 00:11:42.750 "namespaces": [ 00:11:42.750 { 00:11:42.750 "nsid": 1, 00:11:42.750 "bdev_name": "Malloc1", 00:11:42.750 "name": "Malloc1", 00:11:42.750 "nguid": "9BB62CD82BA14884BA79DB8F5977861E", 00:11:42.750 "uuid": "9bb62cd8-2ba1-4884-ba79-db8f5977861e" 00:11:42.750 }, 00:11:42.750 { 00:11:42.750 "nsid": 2, 00:11:42.750 "bdev_name": "Malloc3", 00:11:42.750 "name": "Malloc3", 00:11:42.750 "nguid": "7884966455BF4FD2B5BF1D134E39D6AB", 00:11:42.750 "uuid": "78849664-55bf-4fd2-b5bf-1d134e39d6ab" 00:11:42.750 } 00:11:42.750 ] 00:11:42.750 }, 00:11:42.750 { 00:11:42.750 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:42.750 "subtype": "NVMe", 00:11:42.750 "listen_addresses": [ 00:11:42.750 { 00:11:42.750 "trtype": "VFIOUSER", 00:11:42.750 "adrfam": "IPv4", 00:11:42.750 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:42.750 "trsvcid": "0" 00:11:42.750 } 00:11:42.750 ], 00:11:42.750 "allow_any_host": true, 00:11:42.750 "hosts": [], 00:11:42.750 "serial_number": "SPDK2", 00:11:42.750 "model_number": "SPDK bdev Controller", 00:11:42.750 "max_namespaces": 32, 00:11:42.750 "min_cntlid": 1, 00:11:42.750 "max_cntlid": 65519, 00:11:42.750 "namespaces": [ 00:11:42.750 { 00:11:42.750 "nsid": 1, 00:11:42.750 "bdev_name": "Malloc2", 00:11:42.750 "name": "Malloc2", 00:11:42.750 "nguid": "97E4EF2446F34DF3BFC6E2D1BE1D10FE", 00:11:42.750 "uuid": "97e4ef24-46f3-4df3-bfc6-e2d1be1d10fe" 00:11:42.750 } 00:11:42.750 ] 00:11:42.750 } 00:11:42.750 ] 00:11:43.011 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3620187 00:11:43.011 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:43.011 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:43.011 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:43.011 21:49:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:43.011 [2024-07-15 21:49:37.009802] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:11:43.011 [2024-07-15 21:49:37.009827] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620203 ] 00:11:43.011 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.011 [2024-07-15 21:49:37.038565] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:43.011 [2024-07-15 21:49:37.040802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:43.012 [2024-07-15 21:49:37.040822] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbf6d561000 00:11:43.012 [2024-07-15 21:49:37.041805] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.012 [2024-07-15 21:49:37.042806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.012 [2024-07-15 21:49:37.043811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.012 [2024-07-15 21:49:37.044817] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:43.012 [2024-07-15 21:49:37.045821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:43.012 [2024-07-15 21:49:37.046828] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.012 [2024-07-15 21:49:37.047838] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:43.012 [2024-07-15 21:49:37.048844] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.012 [2024-07-15 21:49:37.049854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:43.012 [2024-07-15 21:49:37.049863] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbf6d556000 00:11:43.012 [2024-07-15 21:49:37.050923] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:43.012 [2024-07-15 21:49:37.068269] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:43.012 [2024-07-15 21:49:37.068289] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:43.012 [2024-07-15 21:49:37.070361] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:43.012 [2024-07-15 21:49:37.070398] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:43.012 [2024-07-15 21:49:37.070465] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:11:43.012 [2024-07-15 21:49:37.070479] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:43.012 [2024-07-15 21:49:37.070484] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:43.012 [2024-07-15 21:49:37.071369] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:43.012 [2024-07-15 21:49:37.071379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:43.012 [2024-07-15 21:49:37.071386] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:43.012 [2024-07-15 21:49:37.072375] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:43.012 [2024-07-15 21:49:37.072383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:43.012 [2024-07-15 21:49:37.072390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:43.012 [2024-07-15 21:49:37.073380] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:43.012 [2024-07-15 21:49:37.073388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:43.012 [2024-07-15 21:49:37.074390] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:43.012 [2024-07-15 21:49:37.074398] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:43.012 [2024-07-15 21:49:37.074402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:43.012 [2024-07-15 21:49:37.074408] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:43.012 [2024-07-15 21:49:37.074513] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:43.012 [2024-07-15 21:49:37.074517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:43.012 [2024-07-15 21:49:37.074521] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:43.012 [2024-07-15 21:49:37.075396] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:43.012 [2024-07-15 21:49:37.076403] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:43.012 [2024-07-15 21:49:37.077418] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:43.012 [2024-07-15 21:49:37.078416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:43.012 [2024-07-15 21:49:37.078453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:43.012 [2024-07-15 21:49:37.079428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:43.012 [2024-07-15 21:49:37.079437] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:43.012 [2024-07-15 21:49:37.079441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.079458] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:43.012 [2024-07-15 21:49:37.079467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.079481] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:43.012 [2024-07-15 21:49:37.079485] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:43.012 [2024-07-15 21:49:37.079496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:43.012 [2024-07-15 21:49:37.087232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:43.012 [2024-07-15 21:49:37.087244] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:43.012 [2024-07-15 21:49:37.087251] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:43.012 [2024-07-15 21:49:37.087255] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:43.012 [2024-07-15 21:49:37.087259] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:43.012 [2024-07-15 21:49:37.087263] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:43.012 [2024-07-15 21:49:37.087267] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:43.012 [2024-07-15 21:49:37.087271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.087279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.087288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:11:43.012 [2024-07-15 21:49:37.095230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:43.012 [2024-07-15 21:49:37.095244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.012 [2024-07-15 21:49:37.095252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.012 [2024-07-15 21:49:37.095259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.012 [2024-07-15 21:49:37.095267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.012 [2024-07-15 21:49:37.095271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.095279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.095287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:43.012 [2024-07-15 21:49:37.103230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:43.012 [2024-07-15 21:49:37.103237] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:43.012 [2024-07-15 21:49:37.103242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.103248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.103253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.103261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:43.012 [2024-07-15 21:49:37.111231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:43.012 [2024-07-15 21:49:37.111286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.111293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.111302] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:43.012 [2024-07-15 21:49:37.111306] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:43.012 [2024-07-15 21:49:37.111312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:11:43.012 [2024-07-15 21:49:37.119230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:43.012 [2024-07-15 21:49:37.119240] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:43.012 [2024-07-15 21:49:37.119248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.119254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:43.012 [2024-07-15 21:49:37.119261] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:43.012 [2024-07-15 21:49:37.119265] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:43.012 [2024-07-15 21:49:37.119270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:43.012 [2024-07-15 21:49:37.127229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:43.013 [2024-07-15 21:49:37.127242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:43.013 [2024-07-15 21:49:37.127249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:43.013 [2024-07-15 21:49:37.127258] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:43.013 [2024-07-15 21:49:37.127263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:43.013 [2024-07-15 21:49:37.127271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:43.013 [2024-07-15 21:49:37.135232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:43.013 [2024-07-15 21:49:37.135243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:43.013 [2024-07-15 21:49:37.135250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:43.013 [2024-07-15 21:49:37.135257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:43.013 [2024-07-15 21:49:37.135266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:43.013 [2024-07-15 21:49:37.135271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:43.013 [2024-07-15 21:49:37.135275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:43.013 
[2024-07-15 21:49:37.135279] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:43.013 [2024-07-15 21:49:37.135283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:43.013 [2024-07-15 21:49:37.135290] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:43.013 [2024-07-15 21:49:37.135306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:43.013 [2024-07-15 21:49:37.143231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:43.013 [2024-07-15 21:49:37.143243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:43.013 [2024-07-15 21:49:37.151229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:43.013 [2024-07-15 21:49:37.151241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:43.013 [2024-07-15 21:49:37.159231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:43.013 [2024-07-15 21:49:37.159243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:43.013 [2024-07-15 21:49:37.167230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:43.013 [2024-07-15 21:49:37.167246] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:43.013 [2024-07-15 21:49:37.167250] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:43.013 [2024-07-15 21:49:37.167253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:43.013 [2024-07-15 21:49:37.167256] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:43.013 [2024-07-15 21:49:37.167262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:43.013 [2024-07-15 21:49:37.167268] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:43.013 [2024-07-15 21:49:37.167272] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:43.013 [2024-07-15 21:49:37.167278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:43.013 [2024-07-15 21:49:37.167284] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:43.013 [2024-07-15 21:49:37.167287] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:43.013 [2024-07-15 21:49:37.167293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:11:43.013 [2024-07-15 21:49:37.167300] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:43.013 [2024-07-15 21:49:37.167303] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:43.013 [2024-07-15 21:49:37.167308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:43.013 [2024-07-15 21:49:37.175229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:43.013 [2024-07-15 21:49:37.175243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:43.013 [2024-07-15 21:49:37.175252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:43.013 [2024-07-15 21:49:37.175258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:43.013 ===================================================== 00:11:43.013 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:43.013 ===================================================== 00:11:43.013 Controller Capabilities/Features 00:11:43.013 ================================ 00:11:43.013 Vendor ID: 4e58 00:11:43.013 Subsystem Vendor ID: 4e58 00:11:43.013 Serial Number: SPDK2 00:11:43.013 Model Number: SPDK bdev Controller 00:11:43.013 Firmware Version: 24.09 00:11:43.013 Recommended Arb Burst: 6 00:11:43.013 IEEE OUI Identifier: 8d 6b 50 00:11:43.013 Multi-path I/O 00:11:43.013 May have multiple subsystem ports: Yes 00:11:43.013 May have multiple controllers: Yes 00:11:43.013 Associated with SR-IOV VF: No 00:11:43.013 Max Data Transfer Size: 131072 00:11:43.013 Max Number of Namespaces: 32 00:11:43.013 Max Number of I/O Queues: 127 00:11:43.013 NVMe Specification Version (VS): 1.3 00:11:43.013 NVMe Specification Version (Identify): 1.3 00:11:43.013 Maximum Queue Entries: 256 00:11:43.013 Contiguous Queues Required: Yes 00:11:43.013 Arbitration Mechanisms Supported 00:11:43.013 Weighted Round Robin: Not Supported 00:11:43.013 Vendor Specific: Not Supported 00:11:43.013 Reset Timeout: 15000 ms 00:11:43.013 Doorbell Stride: 4 bytes 00:11:43.013 NVM Subsystem Reset: Not Supported 00:11:43.013 Command Sets Supported 00:11:43.013 NVM Command Set: Supported 00:11:43.013 Boot Partition: Not Supported 00:11:43.013 Memory Page Size Minimum: 4096 bytes 00:11:43.013 Memory Page Size Maximum: 4096 bytes 00:11:43.013 Persistent Memory Region: Not Supported 00:11:43.013 Optional Asynchronous Events Supported 00:11:43.013 Namespace Attribute Notices: Supported 00:11:43.013 Firmware Activation Notices: Not Supported 00:11:43.013 ANA Change Notices: Not Supported 00:11:43.013 PLE Aggregate Log Change Notices: Not Supported 00:11:43.013 LBA Status Info Alert Notices: Not Supported 00:11:43.013 EGE Aggregate Log Change Notices: Not Supported 00:11:43.013 Normal NVM Subsystem Shutdown event: Not Supported 00:11:43.013 Zone Descriptor Change Notices: Not Supported 00:11:43.013 Discovery Log Change Notices: Not Supported 00:11:43.013 Controller Attributes 00:11:43.013 128-bit Host Identifier: Supported 00:11:43.013 Non-Operational Permissive Mode: Not Supported 00:11:43.013 NVM Sets: Not Supported 00:11:43.013 Read Recovery Levels: Not Supported 
00:11:43.013 Endurance Groups: Not Supported 00:11:43.013 Predictable Latency Mode: Not Supported 00:11:43.013 Traffic Based Keep ALive: Not Supported 00:11:43.013 Namespace Granularity: Not Supported 00:11:43.013 SQ Associations: Not Supported 00:11:43.013 UUID List: Not Supported 00:11:43.013 Multi-Domain Subsystem: Not Supported 00:11:43.013 Fixed Capacity Management: Not Supported 00:11:43.013 Variable Capacity Management: Not Supported 00:11:43.013 Delete Endurance Group: Not Supported 00:11:43.013 Delete NVM Set: Not Supported 00:11:43.013 Extended LBA Formats Supported: Not Supported 00:11:43.013 Flexible Data Placement Supported: Not Supported 00:11:43.013 00:11:43.013 Controller Memory Buffer Support 00:11:43.013 ================================ 00:11:43.013 Supported: No 00:11:43.013 00:11:43.013 Persistent Memory Region Support 00:11:43.013 ================================ 00:11:43.013 Supported: No 00:11:43.013 00:11:43.013 Admin Command Set Attributes 00:11:43.013 ============================ 00:11:43.013 Security Send/Receive: Not Supported 00:11:43.013 Format NVM: Not Supported 00:11:43.013 Firmware Activate/Download: Not Supported 00:11:43.013 Namespace Management: Not Supported 00:11:43.013 Device Self-Test: Not Supported 00:11:43.013 Directives: Not Supported 00:11:43.013 NVMe-MI: Not Supported 00:11:43.013 Virtualization Management: Not Supported 00:11:43.013 Doorbell Buffer Config: Not Supported 00:11:43.013 Get LBA Status Capability: Not Supported 00:11:43.013 Command & Feature Lockdown Capability: Not Supported 00:11:43.013 Abort Command Limit: 4 00:11:43.013 Async Event Request Limit: 4 00:11:43.013 Number of Firmware Slots: N/A 00:11:43.013 Firmware Slot 1 Read-Only: N/A 00:11:43.013 Firmware Activation Without Reset: N/A 00:11:43.013 Multiple Update Detection Support: N/A 00:11:43.013 Firmware Update Granularity: No Information Provided 00:11:43.013 Per-Namespace SMART Log: No 00:11:43.013 Asymmetric Namespace Access Log Page: Not Supported 00:11:43.013 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:43.013 Command Effects Log Page: Supported 00:11:43.013 Get Log Page Extended Data: Supported 00:11:43.013 Telemetry Log Pages: Not Supported 00:11:43.013 Persistent Event Log Pages: Not Supported 00:11:43.013 Supported Log Pages Log Page: May Support 00:11:43.013 Commands Supported & Effects Log Page: Not Supported 00:11:43.013 Feature Identifiers & Effects Log Page:May Support 00:11:43.013 NVMe-MI Commands & Effects Log Page: May Support 00:11:43.013 Data Area 4 for Telemetry Log: Not Supported 00:11:43.013 Error Log Page Entries Supported: 128 00:11:43.013 Keep Alive: Supported 00:11:43.013 Keep Alive Granularity: 10000 ms 00:11:43.013 00:11:43.013 NVM Command Set Attributes 00:11:43.013 ========================== 00:11:43.013 Submission Queue Entry Size 00:11:43.013 Max: 64 00:11:43.014 Min: 64 00:11:43.014 Completion Queue Entry Size 00:11:43.014 Max: 16 00:11:43.014 Min: 16 00:11:43.014 Number of Namespaces: 32 00:11:43.014 Compare Command: Supported 00:11:43.014 Write Uncorrectable Command: Not Supported 00:11:43.014 Dataset Management Command: Supported 00:11:43.014 Write Zeroes Command: Supported 00:11:43.014 Set Features Save Field: Not Supported 00:11:43.014 Reservations: Not Supported 00:11:43.014 Timestamp: Not Supported 00:11:43.014 Copy: Supported 00:11:43.014 Volatile Write Cache: Present 00:11:43.014 Atomic Write Unit (Normal): 1 00:11:43.014 Atomic Write Unit (PFail): 1 00:11:43.014 Atomic Compare & Write Unit: 1 00:11:43.014 Fused Compare & Write: 
Supported 00:11:43.014 Scatter-Gather List 00:11:43.014 SGL Command Set: Supported (Dword aligned) 00:11:43.014 SGL Keyed: Not Supported 00:11:43.014 SGL Bit Bucket Descriptor: Not Supported 00:11:43.014 SGL Metadata Pointer: Not Supported 00:11:43.014 Oversized SGL: Not Supported 00:11:43.014 SGL Metadata Address: Not Supported 00:11:43.014 SGL Offset: Not Supported 00:11:43.014 Transport SGL Data Block: Not Supported 00:11:43.014 Replay Protected Memory Block: Not Supported 00:11:43.014 00:11:43.014 Firmware Slot Information 00:11:43.014 ========================= 00:11:43.014 Active slot: 1 00:11:43.014 Slot 1 Firmware Revision: 24.09 00:11:43.014 00:11:43.014 00:11:43.014 Commands Supported and Effects 00:11:43.014 ============================== 00:11:43.014 Admin Commands 00:11:43.014 -------------- 00:11:43.014 Get Log Page (02h): Supported 00:11:43.014 Identify (06h): Supported 00:11:43.014 Abort (08h): Supported 00:11:43.014 Set Features (09h): Supported 00:11:43.014 Get Features (0Ah): Supported 00:11:43.014 Asynchronous Event Request (0Ch): Supported 00:11:43.014 Keep Alive (18h): Supported 00:11:43.014 I/O Commands 00:11:43.014 ------------ 00:11:43.014 Flush (00h): Supported LBA-Change 00:11:43.014 Write (01h): Supported LBA-Change 00:11:43.014 Read (02h): Supported 00:11:43.014 Compare (05h): Supported 00:11:43.014 Write Zeroes (08h): Supported LBA-Change 00:11:43.014 Dataset Management (09h): Supported LBA-Change 00:11:43.014 Copy (19h): Supported LBA-Change 00:11:43.014 00:11:43.014 Error Log 00:11:43.014 ========= 00:11:43.014 00:11:43.014 Arbitration 00:11:43.014 =========== 00:11:43.014 Arbitration Burst: 1 00:11:43.014 00:11:43.014 Power Management 00:11:43.014 ================ 00:11:43.014 Number of Power States: 1 00:11:43.014 Current Power State: Power State #0 00:11:43.014 Power State #0: 00:11:43.014 Max Power: 0.00 W 00:11:43.014 Non-Operational State: Operational 00:11:43.014 Entry Latency: Not Reported 00:11:43.014 Exit Latency: Not Reported 00:11:43.014 Relative Read Throughput: 0 00:11:43.014 Relative Read Latency: 0 00:11:43.014 Relative Write Throughput: 0 00:11:43.014 Relative Write Latency: 0 00:11:43.014 Idle Power: Not Reported 00:11:43.014 Active Power: Not Reported 00:11:43.014 Non-Operational Permissive Mode: Not Supported 00:11:43.014 00:11:43.014 Health Information 00:11:43.014 ================== 00:11:43.014 Critical Warnings: 00:11:43.014 Available Spare Space: OK 00:11:43.014 Temperature: OK 00:11:43.014 Device Reliability: OK 00:11:43.014 Read Only: No 00:11:43.014 Volatile Memory Backup: OK 00:11:43.014 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:43.014 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:43.014 Available Spare: 0% 00:11:43.014 Available Sp[2024-07-15 21:49:37.175343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:43.014 [2024-07-15 21:49:37.183230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:43.014 [2024-07-15 21:49:37.183259] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:43.014 [2024-07-15 21:49:37.183267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.014 [2024-07-15 21:49:37.183273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.014 [2024-07-15 21:49:37.183278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.014 [2024-07-15 21:49:37.183284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.014 [2024-07-15 21:49:37.183327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:43.014 [2024-07-15 21:49:37.183337] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:43.014 [2024-07-15 21:49:37.184328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:43.014 [2024-07-15 21:49:37.184370] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:43.014 [2024-07-15 21:49:37.184376] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:43.014 [2024-07-15 21:49:37.185333] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:43.014 [2024-07-15 21:49:37.185344] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:43.014 [2024-07-15 21:49:37.185389] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:43.014 [2024-07-15 21:49:37.186485] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:43.014 are Threshold: 0% 00:11:43.014 Life Percentage Used: 0% 00:11:43.014 Data Units Read: 0 00:11:43.014 Data Units Written: 0 00:11:43.014 Host Read Commands: 0 00:11:43.014 Host Write Commands: 0 00:11:43.014 Controller Busy Time: 0 minutes 00:11:43.014 Power Cycles: 0 00:11:43.014 Power On Hours: 0 hours 00:11:43.014 Unsafe Shutdowns: 0 00:11:43.014 Unrecoverable Media Errors: 0 00:11:43.014 Lifetime Error Log Entries: 0 00:11:43.014 Warning Temperature Time: 0 minutes 00:11:43.014 Critical Temperature Time: 0 minutes 00:11:43.014 00:11:43.014 Number of Queues 00:11:43.014 ================ 00:11:43.014 Number of I/O Submission Queues: 127 00:11:43.014 Number of I/O Completion Queues: 127 00:11:43.014 00:11:43.014 Active Namespaces 00:11:43.014 ================= 00:11:43.014 Namespace ID:1 00:11:43.014 Error Recovery Timeout: Unlimited 00:11:43.014 Command Set Identifier: NVM (00h) 00:11:43.014 Deallocate: Supported 00:11:43.014 Deallocated/Unwritten Error: Not Supported 00:11:43.014 Deallocated Read Value: Unknown 00:11:43.014 Deallocate in Write Zeroes: Not Supported 00:11:43.014 Deallocated Guard Field: 0xFFFF 00:11:43.014 Flush: Supported 00:11:43.014 Reservation: Supported 00:11:43.014 Namespace Sharing Capabilities: Multiple Controllers 00:11:43.014 Size (in LBAs): 131072 (0GiB) 00:11:43.014 Capacity (in LBAs): 131072 (0GiB) 00:11:43.014 Utilization (in LBAs): 131072 (0GiB) 00:11:43.014 NGUID: 97E4EF2446F34DF3BFC6E2D1BE1D10FE 00:11:43.014 UUID: 97e4ef24-46f3-4df3-bfc6-e2d1be1d10fe 00:11:43.014 Thin Provisioning: Not Supported 00:11:43.014 Per-NS Atomic Units: Yes 00:11:43.014 Atomic Boundary Size (Normal): 0 00:11:43.014 Atomic Boundary Size 
(PFail): 0 00:11:43.014 Atomic Boundary Offset: 0 00:11:43.014 Maximum Single Source Range Length: 65535 00:11:43.014 Maximum Copy Length: 65535 00:11:43.014 Maximum Source Range Count: 1 00:11:43.014 NGUID/EUI64 Never Reused: No 00:11:43.014 Namespace Write Protected: No 00:11:43.014 Number of LBA Formats: 1 00:11:43.014 Current LBA Format: LBA Format #00 00:11:43.014 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:43.014 00:11:43.014 21:49:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:43.274 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.274 [2024-07-15 21:49:37.405487] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:48.556 Initializing NVMe Controllers 00:11:48.556 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:48.556 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:48.556 Initialization complete. Launching workers. 00:11:48.556 ======================================================== 00:11:48.556 Latency(us) 00:11:48.556 Device Information : IOPS MiB/s Average min max 00:11:48.556 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39906.90 155.89 3207.28 978.31 7545.03 00:11:48.556 ======================================================== 00:11:48.556 Total : 39906.90 155.89 3207.28 978.31 7545.03 00:11:48.556 00:11:48.556 [2024-07-15 21:49:42.511475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:48.556 21:49:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:48.556 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.556 [2024-07-15 21:49:42.743219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:53.830 Initializing NVMe Controllers 00:11:53.830 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:53.830 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:53.830 Initialization complete. Launching workers. 
00:11:53.830 ======================================================== 00:11:53.830 Latency(us) 00:11:53.830 Device Information : IOPS MiB/s Average min max 00:11:53.830 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39960.76 156.10 3202.74 960.94 6640.88 00:11:53.830 ======================================================== 00:11:53.830 Total : 39960.76 156.10 3202.74 960.94 6640.88 00:11:53.830 00:11:53.830 [2024-07-15 21:49:47.761569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:53.830 21:49:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:53.830 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.830 [2024-07-15 21:49:47.955642] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:59.209 [2024-07-15 21:49:53.092317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:59.209 Initializing NVMe Controllers 00:11:59.209 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:59.209 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:59.209 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:59.209 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:59.209 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:59.209 Initialization complete. Launching workers. 00:11:59.209 Starting thread on core 2 00:11:59.209 Starting thread on core 3 00:11:59.209 Starting thread on core 1 00:11:59.209 21:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:59.209 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.209 [2024-07-15 21:49:53.377689] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:02.496 [2024-07-15 21:49:56.439805] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:02.496 Initializing NVMe Controllers 00:12:02.496 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:02.496 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:02.496 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:02.496 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:02.496 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:02.496 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:02.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:02.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:02.496 Initialization complete. Launching workers. 
00:12:02.496 Starting thread on core 1 with urgent priority queue 00:12:02.496 Starting thread on core 2 with urgent priority queue 00:12:02.496 Starting thread on core 3 with urgent priority queue 00:12:02.496 Starting thread on core 0 with urgent priority queue 00:12:02.496 SPDK bdev Controller (SPDK2 ) core 0: 8216.33 IO/s 12.17 secs/100000 ios 00:12:02.496 SPDK bdev Controller (SPDK2 ) core 1: 8275.00 IO/s 12.08 secs/100000 ios 00:12:02.496 SPDK bdev Controller (SPDK2 ) core 2: 6991.00 IO/s 14.30 secs/100000 ios 00:12:02.496 SPDK bdev Controller (SPDK2 ) core 3: 8584.33 IO/s 11.65 secs/100000 ios 00:12:02.496 ======================================================== 00:12:02.496 00:12:02.496 21:49:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:02.496 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.496 [2024-07-15 21:49:56.719681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:02.496 Initializing NVMe Controllers 00:12:02.496 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:02.496 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:02.496 Namespace ID: 1 size: 0GB 00:12:02.496 Initialization complete. 00:12:02.496 INFO: using host memory buffer for IO 00:12:02.496 Hello world! 00:12:02.496 [2024-07-15 21:49:56.729749] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:02.755 21:49:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:02.755 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.014 [2024-07-15 21:49:57.006069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:03.951 Initializing NVMe Controllers 00:12:03.951 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:03.951 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:03.951 Initialization complete. Launching workers. 
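[editor's note] The submit/complete latency histograms printed next come from the overhead tool launched just above. To rerun that measurement in isolation against the same endpoint, the invocation can be lifted verbatim from this trace (the vfio-user target must still be serving cnode2):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead \
      -o 4096 -t 1 -H -g -d 256 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

Here -o 4096 issues 4 KiB I/Os for -t 1 second, and -H requests the per-operation histograms shown below.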
00:12:03.951 submit (in ns) avg, min, max = 7441.5, 3248.7, 3998872.2 00:12:03.951 complete (in ns) avg, min, max = 21677.3, 1810.4, 3997849.6 00:12:03.951 00:12:03.951 Submit histogram 00:12:03.951 ================ 00:12:03.951 Range in us Cumulative Count 00:12:03.951 3.242 - 3.256: 0.0062% ( 1) 00:12:03.951 3.256 - 3.270: 0.0124% ( 1) 00:12:03.951 3.270 - 3.283: 0.0496% ( 6) 00:12:03.951 3.283 - 3.297: 0.3904% ( 55) 00:12:03.951 3.297 - 3.311: 1.3756% ( 159) 00:12:03.951 3.311 - 3.325: 2.4662% ( 176) 00:12:03.951 3.325 - 3.339: 5.2857% ( 455) 00:12:03.951 3.339 - 3.353: 10.2367% ( 799) 00:12:03.951 3.353 - 3.367: 15.9128% ( 916) 00:12:03.951 3.367 - 3.381: 21.3471% ( 877) 00:12:03.951 3.381 - 3.395: 27.5499% ( 1001) 00:12:03.951 3.395 - 3.409: 33.4614% ( 954) 00:12:03.951 3.409 - 3.423: 38.1336% ( 754) 00:12:03.951 3.423 - 3.437: 43.4998% ( 866) 00:12:03.951 3.437 - 3.450: 48.6430% ( 830) 00:12:03.951 3.450 - 3.464: 52.7823% ( 668) 00:12:03.951 3.464 - 3.478: 56.6861% ( 630) 00:12:03.951 3.478 - 3.492: 62.8393% ( 993) 00:12:03.951 3.492 - 3.506: 69.2465% ( 1034) 00:12:03.951 3.506 - 3.520: 73.2495% ( 646) 00:12:03.951 3.520 - 3.534: 77.5871% ( 700) 00:12:03.951 3.534 - 3.548: 82.0300% ( 717) 00:12:03.951 3.548 - 3.562: 84.6016% ( 415) 00:12:03.951 3.562 - 3.590: 87.1360% ( 409) 00:12:03.951 3.590 - 3.617: 88.1212% ( 159) 00:12:03.951 3.617 - 3.645: 89.2738% ( 186) 00:12:03.951 3.645 - 3.673: 90.9097% ( 264) 00:12:03.951 3.673 - 3.701: 92.6881% ( 287) 00:12:03.951 3.701 - 3.729: 94.2186% ( 247) 00:12:03.951 3.729 - 3.757: 95.8669% ( 266) 00:12:03.951 3.757 - 3.784: 97.3293% ( 236) 00:12:03.951 3.784 - 3.812: 98.3703% ( 168) 00:12:03.951 3.812 - 3.840: 98.8908% ( 84) 00:12:03.951 3.840 - 3.868: 99.3184% ( 69) 00:12:03.951 3.868 - 3.896: 99.5105% ( 31) 00:12:03.951 3.896 - 3.923: 99.5600% ( 8) 00:12:03.951 3.923 - 3.951: 99.5972% ( 6) 00:12:03.951 3.951 - 3.979: 99.6034% ( 1) 00:12:03.951 3.979 - 4.007: 99.6096% ( 1) 00:12:03.951 4.007 - 4.035: 99.6220% ( 2) 00:12:03.951 4.063 - 4.090: 99.6282% ( 1) 00:12:03.951 4.090 - 4.118: 99.6344% ( 1) 00:12:03.951 4.174 - 4.202: 99.6406% ( 1) 00:12:03.951 4.981 - 5.009: 99.6468% ( 1) 00:12:03.951 5.120 - 5.148: 99.6530% ( 1) 00:12:03.951 5.203 - 5.231: 99.6592% ( 1) 00:12:03.951 5.315 - 5.343: 99.6716% ( 2) 00:12:03.951 5.343 - 5.370: 99.6778% ( 1) 00:12:03.951 5.370 - 5.398: 99.6840% ( 1) 00:12:03.951 5.398 - 5.426: 99.6964% ( 2) 00:12:03.951 5.454 - 5.482: 99.7150% ( 3) 00:12:03.951 5.482 - 5.510: 99.7274% ( 2) 00:12:03.951 5.510 - 5.537: 99.7459% ( 3) 00:12:03.951 5.593 - 5.621: 99.7521% ( 1) 00:12:03.951 5.621 - 5.649: 99.7645% ( 2) 00:12:03.951 5.649 - 5.677: 99.7707% ( 1) 00:12:03.951 5.732 - 5.760: 99.7769% ( 1) 00:12:03.951 5.760 - 5.788: 99.7955% ( 3) 00:12:03.951 5.843 - 5.871: 99.8017% ( 1) 00:12:03.951 5.927 - 5.955: 99.8079% ( 1) 00:12:03.951 6.122 - 6.150: 99.8141% ( 1) 00:12:03.951 6.233 - 6.261: 99.8203% ( 1) 00:12:03.951 6.289 - 6.317: 99.8265% ( 1) 00:12:03.951 6.344 - 6.372: 99.8327% ( 1) 00:12:03.951 6.400 - 6.428: 99.8451% ( 2) 00:12:03.951 6.762 - 6.790: 99.8513% ( 1) 00:12:03.951 6.984 - 7.012: 99.8575% ( 1) 00:12:03.951 7.012 - 7.040: 99.8637% ( 1) 00:12:03.951 7.096 - 7.123: 99.8699% ( 1) 00:12:03.951 7.402 - 7.457: 99.8761% ( 1) 00:12:03.951 7.903 - 7.958: 99.8823% ( 1) 00:12:03.951 8.181 - 8.237: 99.8885% ( 1) 00:12:03.951 9.405 - 9.461: 99.8947% ( 1) 00:12:03.951 13.134 - 13.190: 99.9009% ( 1) 00:12:03.951 3989.148 - 4017.642: 100.0000% ( 16) 00:12:03.951 00:12:03.951 Complete histogram 00:12:03.951 ================== 
00:12:03.951 Range in us Cumulative Count 00:12:03.951 1.809 - 1.823: 0.9976% ( 161) 00:12:03.951 1.823 - 1.837: 4.5111% ( 567) 00:12:03.952 1.837 - [2024-07-15 21:49:58.099280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:03.952 1.850: 6.4258% ( 309) 00:12:03.952 1.850 - 1.864: 14.5185% ( 1306) 00:12:03.952 1.864 - 1.878: 63.1801% ( 7853) 00:12:03.952 1.878 - 1.892: 91.1141% ( 4508) 00:12:03.952 1.892 - 1.906: 95.0799% ( 640) 00:12:03.952 1.906 - 1.920: 96.9699% ( 305) 00:12:03.952 1.920 - 1.934: 97.5833% ( 99) 00:12:03.952 1.934 - 1.948: 98.1844% ( 97) 00:12:03.952 1.948 - 1.962: 98.8227% ( 103) 00:12:03.952 1.962 - 1.976: 99.0581% ( 38) 00:12:03.952 1.976 - 1.990: 99.1511% ( 15) 00:12:03.952 1.990 - 2.003: 99.1944% ( 7) 00:12:03.952 2.003 - 2.017: 99.2130% ( 3) 00:12:03.952 2.017 - 2.031: 99.2192% ( 1) 00:12:03.952 2.031 - 2.045: 99.2378% ( 3) 00:12:03.952 2.045 - 2.059: 99.2564% ( 3) 00:12:03.952 2.059 - 2.073: 99.2626% ( 1) 00:12:03.952 2.073 - 2.087: 99.2812% ( 3) 00:12:03.952 2.087 - 2.101: 99.2874% ( 1) 00:12:03.952 2.115 - 2.129: 99.2998% ( 2) 00:12:03.952 2.129 - 2.143: 99.3122% ( 2) 00:12:03.952 2.240 - 2.254: 99.3184% ( 1) 00:12:03.952 2.296 - 2.310: 99.3246% ( 1) 00:12:03.952 2.323 - 2.337: 99.3370% ( 2) 00:12:03.952 2.449 - 2.463: 99.3432% ( 1) 00:12:03.952 3.409 - 3.423: 99.3494% ( 1) 00:12:03.952 3.437 - 3.450: 99.3556% ( 1) 00:12:03.952 3.812 - 3.840: 99.3680% ( 2) 00:12:03.952 4.090 - 4.118: 99.3741% ( 1) 00:12:03.952 4.118 - 4.146: 99.3803% ( 1) 00:12:03.952 4.230 - 4.257: 99.3865% ( 1) 00:12:03.952 4.313 - 4.341: 99.3927% ( 1) 00:12:03.952 4.369 - 4.397: 99.3989% ( 1) 00:12:03.952 4.397 - 4.424: 99.4051% ( 1) 00:12:03.952 4.480 - 4.508: 99.4113% ( 1) 00:12:03.952 4.508 - 4.536: 99.4175% ( 1) 00:12:03.952 4.563 - 4.591: 99.4299% ( 2) 00:12:03.952 4.591 - 4.619: 99.4423% ( 2) 00:12:03.952 4.619 - 4.647: 99.4485% ( 1) 00:12:03.952 4.675 - 4.703: 99.4547% ( 1) 00:12:03.952 4.758 - 4.786: 99.4609% ( 1) 00:12:03.952 4.786 - 4.814: 99.4671% ( 1) 00:12:03.952 4.842 - 4.870: 99.4733% ( 1) 00:12:03.952 4.870 - 4.897: 99.4795% ( 1) 00:12:03.952 5.009 - 5.037: 99.4857% ( 1) 00:12:03.952 5.315 - 5.343: 99.4919% ( 1) 00:12:03.952 5.398 - 5.426: 99.4981% ( 1) 00:12:03.952 6.623 - 6.650: 99.5043% ( 1) 00:12:03.952 3989.148 - 4017.642: 100.0000% ( 80) 00:12:03.952 00:12:03.952 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:03.952 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:03.952 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:03.952 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:03.952 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:04.211 [ 00:12:04.211 { 00:12:04.211 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:04.211 "subtype": "Discovery", 00:12:04.211 "listen_addresses": [], 00:12:04.211 "allow_any_host": true, 00:12:04.211 "hosts": [] 00:12:04.211 }, 00:12:04.211 { 00:12:04.211 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:04.211 "subtype": "NVMe", 00:12:04.211 "listen_addresses": [ 00:12:04.211 { 00:12:04.211 "trtype": "VFIOUSER", 00:12:04.211 "adrfam": "IPv4", 00:12:04.211 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:12:04.211 "trsvcid": "0" 00:12:04.211 } 00:12:04.211 ], 00:12:04.211 "allow_any_host": true, 00:12:04.211 "hosts": [], 00:12:04.211 "serial_number": "SPDK1", 00:12:04.211 "model_number": "SPDK bdev Controller", 00:12:04.211 "max_namespaces": 32, 00:12:04.211 "min_cntlid": 1, 00:12:04.211 "max_cntlid": 65519, 00:12:04.211 "namespaces": [ 00:12:04.211 { 00:12:04.211 "nsid": 1, 00:12:04.211 "bdev_name": "Malloc1", 00:12:04.211 "name": "Malloc1", 00:12:04.211 "nguid": "9BB62CD82BA14884BA79DB8F5977861E", 00:12:04.211 "uuid": "9bb62cd8-2ba1-4884-ba79-db8f5977861e" 00:12:04.211 }, 00:12:04.211 { 00:12:04.211 "nsid": 2, 00:12:04.211 "bdev_name": "Malloc3", 00:12:04.211 "name": "Malloc3", 00:12:04.211 "nguid": "7884966455BF4FD2B5BF1D134E39D6AB", 00:12:04.211 "uuid": "78849664-55bf-4fd2-b5bf-1d134e39d6ab" 00:12:04.211 } 00:12:04.211 ] 00:12:04.211 }, 00:12:04.211 { 00:12:04.211 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:04.211 "subtype": "NVMe", 00:12:04.211 "listen_addresses": [ 00:12:04.211 { 00:12:04.211 "trtype": "VFIOUSER", 00:12:04.211 "adrfam": "IPv4", 00:12:04.211 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:04.211 "trsvcid": "0" 00:12:04.211 } 00:12:04.211 ], 00:12:04.211 "allow_any_host": true, 00:12:04.211 "hosts": [], 00:12:04.211 "serial_number": "SPDK2", 00:12:04.211 "model_number": "SPDK bdev Controller", 00:12:04.211 "max_namespaces": 32, 00:12:04.211 "min_cntlid": 1, 00:12:04.211 "max_cntlid": 65519, 00:12:04.211 "namespaces": [ 00:12:04.211 { 00:12:04.211 "nsid": 1, 00:12:04.211 "bdev_name": "Malloc2", 00:12:04.211 "name": "Malloc2", 00:12:04.211 "nguid": "97E4EF2446F34DF3BFC6E2D1BE1D10FE", 00:12:04.211 "uuid": "97e4ef24-46f3-4df3-bfc6-e2d1be1d10fe" 00:12:04.211 } 00:12:04.211 ] 00:12:04.211 } 00:12:04.211 ] 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3623821 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:04.211 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:04.211 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.470 [2024-07-15 21:49:58.483683] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:04.470 Malloc4 00:12:04.470 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:04.470 [2024-07-15 21:49:58.694285] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:04.730 Asynchronous Event Request test 00:12:04.730 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:04.730 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:04.730 Registering asynchronous event callbacks... 00:12:04.730 Starting namespace attribute notice tests for all controllers... 00:12:04.730 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:04.730 aer_cb - Changed Namespace 00:12:04.730 Cleaning up... 00:12:04.730 [ 00:12:04.730 { 00:12:04.730 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:04.730 "subtype": "Discovery", 00:12:04.730 "listen_addresses": [], 00:12:04.730 "allow_any_host": true, 00:12:04.730 "hosts": [] 00:12:04.730 }, 00:12:04.730 { 00:12:04.730 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:04.730 "subtype": "NVMe", 00:12:04.730 "listen_addresses": [ 00:12:04.730 { 00:12:04.730 "trtype": "VFIOUSER", 00:12:04.730 "adrfam": "IPv4", 00:12:04.730 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:04.730 "trsvcid": "0" 00:12:04.730 } 00:12:04.730 ], 00:12:04.730 "allow_any_host": true, 00:12:04.730 "hosts": [], 00:12:04.730 "serial_number": "SPDK1", 00:12:04.730 "model_number": "SPDK bdev Controller", 00:12:04.730 "max_namespaces": 32, 00:12:04.730 "min_cntlid": 1, 00:12:04.730 "max_cntlid": 65519, 00:12:04.730 "namespaces": [ 00:12:04.730 { 00:12:04.730 "nsid": 1, 00:12:04.730 "bdev_name": "Malloc1", 00:12:04.730 "name": "Malloc1", 00:12:04.730 "nguid": "9BB62CD82BA14884BA79DB8F5977861E", 00:12:04.730 "uuid": "9bb62cd8-2ba1-4884-ba79-db8f5977861e" 00:12:04.730 }, 00:12:04.730 { 00:12:04.730 "nsid": 2, 00:12:04.730 "bdev_name": "Malloc3", 00:12:04.730 "name": "Malloc3", 00:12:04.730 "nguid": "7884966455BF4FD2B5BF1D134E39D6AB", 00:12:04.730 "uuid": "78849664-55bf-4fd2-b5bf-1d134e39d6ab" 00:12:04.730 } 00:12:04.730 ] 00:12:04.730 }, 00:12:04.730 { 00:12:04.730 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:04.730 "subtype": "NVMe", 00:12:04.730 "listen_addresses": [ 00:12:04.730 { 00:12:04.730 "trtype": "VFIOUSER", 00:12:04.730 "adrfam": "IPv4", 00:12:04.730 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:04.730 "trsvcid": "0" 00:12:04.730 } 00:12:04.730 ], 00:12:04.730 "allow_any_host": true, 00:12:04.730 "hosts": [], 00:12:04.730 "serial_number": "SPDK2", 00:12:04.730 "model_number": "SPDK bdev Controller", 00:12:04.730 
"max_namespaces": 32, 00:12:04.730 "min_cntlid": 1, 00:12:04.730 "max_cntlid": 65519, 00:12:04.730 "namespaces": [ 00:12:04.730 { 00:12:04.730 "nsid": 1, 00:12:04.730 "bdev_name": "Malloc2", 00:12:04.730 "name": "Malloc2", 00:12:04.730 "nguid": "97E4EF2446F34DF3BFC6E2D1BE1D10FE", 00:12:04.730 "uuid": "97e4ef24-46f3-4df3-bfc6-e2d1be1d10fe" 00:12:04.730 }, 00:12:04.730 { 00:12:04.730 "nsid": 2, 00:12:04.730 "bdev_name": "Malloc4", 00:12:04.730 "name": "Malloc4", 00:12:04.730 "nguid": "762F9FAFC68B41088B7AC37935A4ACA8", 00:12:04.730 "uuid": "762f9faf-c68b-4108-8b7a-c37935a4aca8" 00:12:04.730 } 00:12:04.730 ] 00:12:04.730 } 00:12:04.730 ] 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3623821 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3616029 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3616029 ']' 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3616029 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3616029 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3616029' 00:12:04.730 killing process with pid 3616029 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3616029 00:12:04.730 21:49:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3616029 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3623894 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3623894' 00:12:04.989 Process pid: 3623894 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3623894 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3623894 ']' 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.989 21:49:59 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.989 21:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:05.249 [2024-07-15 21:49:59.245868] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:05.249 [2024-07-15 21:49:59.246718] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:12:05.249 [2024-07-15 21:49:59.246757] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.249 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.249 [2024-07-15 21:49:59.300761] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.249 [2024-07-15 21:49:59.369030] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.249 [2024-07-15 21:49:59.369072] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.249 [2024-07-15 21:49:59.369082] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.249 [2024-07-15 21:49:59.369088] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.249 [2024-07-15 21:49:59.369092] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.249 [2024-07-15 21:49:59.369185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.249 [2024-07-15 21:49:59.369323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.249 [2024-07-15 21:49:59.369345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.249 [2024-07-15 21:49:59.369347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.249 [2024-07-15 21:49:59.455090] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:05.249 [2024-07-15 21:49:59.455237] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:05.249 [2024-07-15 21:49:59.455449] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:05.249 [2024-07-15 21:49:59.455761] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:05.249 [2024-07-15 21:49:59.455983] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
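[editor's note] For reference, the interrupt-mode bring-up that the trace above starts and the trace below completes can be reproduced by hand. A minimal sketch using the exact binaries, flags, and NQNs from this log (SPDK= and RPC= are shorthands introduced here, not part of the test script):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  sleep 1                                         # the script instead polls via waitforlisten
  $RPC nvmf_create_transport -t VFIOUSER -M -I    # -M -I: transport args this test passes for the interrupt-mode variant
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $RPC bdev_malloc_create 64 512 -b Malloc1       # 64 MB malloc bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same malloc/subsystem/listener sequence is then repeated for Malloc2 and cnode2 under /var/run/vfio-user/domain/vfio-user2/2, which is the endpoint the example binaries in this run connect to.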
00:12:06.187 21:50:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.187 21:50:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:06.187 21:50:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:07.125 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:07.125 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:07.125 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:07.125 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.125 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:07.125 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:07.383 Malloc1 00:12:07.383 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:07.383 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:07.642 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:07.901 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.901 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:07.901 21:50:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:08.159 Malloc2 00:12:08.159 21:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:08.159 21:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:08.419 21:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3623894 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3623894 ']' 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3623894 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:08.678 21:50:02 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3623894 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3623894' 00:12:08.678 killing process with pid 3623894 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3623894 00:12:08.678 21:50:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3623894 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:08.937 00:12:08.937 real 0m51.381s 00:12:08.937 user 3m23.404s 00:12:08.937 sys 0m3.679s 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:08.937 ************************************ 00:12:08.937 END TEST nvmf_vfio_user 00:12:08.937 ************************************ 00:12:08.937 21:50:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:08.937 21:50:03 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:08.937 21:50:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:08.937 21:50:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.937 21:50:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:08.937 ************************************ 00:12:08.937 START TEST nvmf_vfio_user_nvme_compliance 00:12:08.937 ************************************ 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:08.937 * Looking for test storage... 
00:12:08.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3624653 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3624653' 00:12:08.937 Process pid: 3624653 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3624653 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3624653 ']' 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:08.937 21:50:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:09.197 [2024-07-15 21:50:03.216379] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:12:09.197 [2024-07-15 21:50:03.216431] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.197 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.197 [2024-07-15 21:50:03.272147] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.197 [2024-07-15 21:50:03.349051] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.197 [2024-07-15 21:50:03.349090] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.197 [2024-07-15 21:50:03.349097] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.197 [2024-07-15 21:50:03.349103] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.197 [2024-07-15 21:50:03.349108] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
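[editor's note] The compliance stage that follows drives a CUnit suite against a freshly created vfio-user controller. Condensed from the trace below, the whole flow fits in a few commands; a hedged sketch (SPDK= is shorthand, and the target is the nvmf_tgt -m 0x7 instance started above):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  $SPDK/test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Each passed test in the output pairs an enabling-controller notice with the specific admin-command error paths it provokes (invalid NSID, deleting nonexistent SQs/CQs, oversized log-page offsets, and so on).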
00:12:09.197 [2024-07-15 21:50:03.349151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.197 [2024-07-15 21:50:03.349257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.197 [2024-07-15 21:50:03.349259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.134 21:50:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.134 21:50:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:10.134 21:50:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:11.072 malloc0 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:11.072 21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.072 
21:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:11.072 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.072 00:12:11.072 00:12:11.072 CUnit - A unit testing framework for C - Version 2.1-3 00:12:11.072 http://cunit.sourceforge.net/ 00:12:11.072 00:12:11.072 00:12:11.072 Suite: nvme_compliance 00:12:11.072 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 21:50:05.234797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:11.072 [2024-07-15 21:50:05.236137] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:11.072 [2024-07-15 21:50:05.236152] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:11.072 [2024-07-15 21:50:05.236159] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:11.072 [2024-07-15 21:50:05.239829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:11.072 passed 00:12:11.331 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 21:50:05.322401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:11.331 [2024-07-15 21:50:05.325416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:11.331 passed 00:12:11.331 Test: admin_identify_ns ...[2024-07-15 21:50:05.408399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:11.331 [2024-07-15 21:50:05.469236] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:11.331 [2024-07-15 21:50:05.477236] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:11.331 [2024-07-15 21:50:05.498323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:11.331 passed 00:12:11.590 Test: admin_get_features_mandatory_features ...[2024-07-15 21:50:05.575886] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:11.590 [2024-07-15 21:50:05.581916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:11.590 passed 00:12:11.590 Test: admin_get_features_optional_features ...[2024-07-15 21:50:05.660418] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:11.590 [2024-07-15 21:50:05.663435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:11.590 passed 00:12:11.590 Test: admin_set_features_number_of_queues ...[2024-07-15 21:50:05.745410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:11.849 [2024-07-15 21:50:05.850316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:11.849 passed 00:12:11.849 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 21:50:05.929686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:11.849 [2024-07-15 21:50:05.932711] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:11.849 passed 00:12:11.849 Test: admin_get_log_page_with_lpo ...[2024-07-15 21:50:06.011478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:11.849 [2024-07-15 21:50:06.079238] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:12.108 [2024-07-15 21:50:06.092283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:12.108 passed 00:12:12.108 Test: fabric_property_get ...[2024-07-15 21:50:06.170660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:12.108 [2024-07-15 21:50:06.171891] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:12.108 [2024-07-15 21:50:06.173685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:12.108 passed 00:12:12.108 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 21:50:06.255216] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:12.108 [2024-07-15 21:50:06.256462] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:12.108 [2024-07-15 21:50:06.258249] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:12.108 passed 00:12:12.108 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 21:50:06.337207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:12.366 [2024-07-15 21:50:06.421245] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:12.366 [2024-07-15 21:50:06.437231] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:12.366 [2024-07-15 21:50:06.442320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:12.366 passed 00:12:12.366 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 21:50:06.524664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:12.366 [2024-07-15 21:50:06.525897] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:12.366 [2024-07-15 21:50:06.527687] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:12.366 passed 00:12:12.366 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 21:50:06.606477] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:12.625 [2024-07-15 21:50:06.683233] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:12.625 [2024-07-15 21:50:06.707233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:12.625 [2024-07-15 21:50:06.712310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:12.625 passed 00:12:12.625 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 21:50:06.785615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:12.625 [2024-07-15 21:50:06.786850] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:12.625 [2024-07-15 21:50:06.786873] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:12.625 [2024-07-15 21:50:06.788635] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:12.625 passed 00:12:12.884 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 21:50:06.872434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:12.884 [2024-07-15 21:50:06.963231] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:12.884 [2024-07-15 21:50:06.971231] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:12.884 [2024-07-15 21:50:06.979239] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:12.884 [2024-07-15 21:50:06.987234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:12.884 [2024-07-15 21:50:07.016306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:12.884 passed 00:12:12.884 Test: admin_create_io_sq_verify_pc ...[2024-07-15 21:50:07.092610] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:12.884 [2024-07-15 21:50:07.109238] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:13.143 [2024-07-15 21:50:07.126773] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:13.143 passed 00:12:13.143 Test: admin_create_io_qp_max_qps ...[2024-07-15 21:50:07.209318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:14.116 [2024-07-15 21:50:08.308232] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:14.694 [2024-07-15 21:50:08.688228] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:14.694 passed 00:12:14.694 Test: admin_create_io_sq_shared_cq ...[2024-07-15 21:50:08.770235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:14.694 [2024-07-15 21:50:08.900234] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:14.953 [2024-07-15 21:50:08.937286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:14.953 passed 00:12:14.953 00:12:14.953 Run Summary: Type Total Ran Passed Failed Inactive 00:12:14.953 suites 1 1 n/a 0 0 00:12:14.953 tests 18 18 18 0 0 00:12:14.953 asserts 360 360 360 0 n/a 00:12:14.953 00:12:14.953 Elapsed time = 1.526 seconds 00:12:14.953 21:50:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3624653 00:12:14.953 21:50:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3624653 ']' 00:12:14.953 21:50:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3624653 00:12:14.953 21:50:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:14.953 21:50:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:14.953 21:50:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3624653 00:12:14.953 21:50:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:14.953 21:50:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:14.953 21:50:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3624653' 00:12:14.953 killing process with pid 3624653 00:12:14.953 21:50:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3624653 00:12:14.953 21:50:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3624653 00:12:15.212 21:50:09 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:15.212 21:50:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:15.212 00:12:15.212 real 0m6.142s 00:12:15.212 user 0m17.617s 00:12:15.212 sys 0m0.452s 00:12:15.212 21:50:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.212 21:50:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:15.212 ************************************ 00:12:15.212 END TEST nvmf_vfio_user_nvme_compliance 00:12:15.212 ************************************ 00:12:15.212 21:50:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:15.212 21:50:09 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:15.212 21:50:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:15.212 21:50:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.213 21:50:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.213 ************************************ 00:12:15.213 START TEST nvmf_vfio_user_fuzz 00:12:15.213 ************************************ 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:15.213 * Looking for test storage... 00:12:15.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
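Before the fuzz target comes up below, it is worth condensing what the compliance run above actually did: compliance.sh stood up a vfio-user subsystem with five RPCs and pointed the nvme_compliance binary at it. A condensed sketch, equivalent to the traced rpc_cmd calls (same NQN, backing bdev, and socket directory):

    # Condensed equivalent of the compliance.sh setup and test run traced above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    nqn=nqn.2021-09.io.spdk:cnode0
    traddr=/var/run/vfio-user

    mkdir -p "$traddr"
    "$RPC" nvmf_create_transport -t VFIOUSER             # vfio-user transport
    "$RPC" bdev_malloc_create 64 512 -b malloc0          # 64 MiB bdev, 512 B blocks
    "$RPC" nvmf_create_subsystem "$nqn" -a -s spdk -m 32 # -a: allow any host
    "$RPC" nvmf_subsystem_add_ns "$nqn" malloc0
    "$RPC" nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0

    # Run the CUnit compliance suite against the vfio-user endpoint.
    "$SPDK/test/nvme/compliance/nvme_compliance" -g \
        -r "trtype:VFIOUSER traddr:$traddr subnqn:$nqn"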
00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.213 21:50:09 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3625770 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3625770' 00:12:15.213 Process pid: 3625770 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3625770 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3625770 ']' 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
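Every target launch in this log follows the same pid/trap/waitforlisten bracket. killprocess and waitforlisten are autotest_common.sh helpers; a simplified stand-in that inlines the idea (the real waitforlisten also handles non-default RPC addresses):

    # Simplified stand-in for the launch pattern traced above.
    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # assumes nvmf_tgt is on PATH
    nvmfpid=$!
    echo "Process pid: $nvmfpid"

    # Always tear the target down, even on interrupt or failure.
    trap 'kill -9 $nvmfpid 2> /dev/null; exit 1' SIGINT SIGTERM EXIT

    # waitforlisten: poll the RPC socket until it answers (max_retries=100,
    # matching the helper's default visible in the trace above).
    for ((i = 100; i != 0; i--)); do
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
    (( i != 0 )) || { echo 'target never came up'; exit 1; }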
00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.213 21:50:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:16.148 21:50:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:16.148 21:50:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:16.148 21:50:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:17.083 malloc0 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:17.083 21:50:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:49.163 Fuzzing completed. 
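The fuzz pass itself is a single nvme_fuzz invocation against that subsystem. Spelled out, with flag meanings hedged where the trace does not show them:

    # Fuzz the vfio-user controller: -m 0x2 pins the app to core 1, -t 30 runs
    # for 30 seconds, -S 123456 fixes the random seed so a failure reproduces,
    # -F names the transport ID under test; -N and -a are passed through as in
    # vfio_user_fuzz.sh (see nvme_fuzz --help for their exact semantics).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'

    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a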
Shutting down the fuzz application 00:12:49.163 00:12:49.163 Dumping successful admin opcodes: 00:12:49.163 8, 9, 10, 24, 00:12:49.163 Dumping successful io opcodes: 00:12:49.163 0, 00:12:49.163 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1003199, total successful commands: 3932, random_seed: 330353664 00:12:49.163 NS: 0x200003a1ef00 admin qp, Total commands completed: 247660, total successful commands: 2002, random_seed: 1884264768 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3625770 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3625770 ']' 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3625770 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3625770 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3625770' 00:12:49.163 killing process with pid 3625770 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3625770 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3625770 00:12:49.163 21:50:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:49.163 21:50:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:49.163 00:12:49.163 real 0m32.760s 00:12:49.163 user 0m30.345s 00:12:49.163 sys 0m31.098s 00:12:49.163 21:50:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.163 21:50:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:49.163 ************************************ 00:12:49.163 END TEST nvmf_vfio_user_fuzz 00:12:49.163 ************************************ 00:12:49.163 21:50:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:49.163 21:50:42 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:49.163 21:50:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:49.163 21:50:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.163 21:50:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.163 ************************************ 
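Teardown mirrors setup: drop the subsystem over RPC, kill the target, then remove the vfio-user socket directory and the fuzz logs. A condensed sketch of lines 44-50 of vfio_user_fuzz.sh as traced above (nvmfpid as captured at launch):

    # Condensed teardown, mirroring the traced vfio_user_fuzz.sh steps.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$RPC" nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0  # subsystem first
    kill "$nvmfpid" && wait "$nvmfpid"                       # then the target
    rm -rf /var/run/vfio-user                                # vfio-user sockets
    trap - SIGINT SIGTERM EXIT                               # clear cleanup trap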
00:12:49.163 START TEST nvmf_host_management 00:12:49.163 ************************************ 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:49.163 * Looking for test storage... 00:12:49.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.163 
21:50:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:49.163 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.164 21:50:42 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.164 21:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.376 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.376 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:53.376 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:53.376 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:53.376 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:53.376 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:53.376 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:53.376 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:53.376 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:53.377 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:53.377 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:53.377 Found net devices under 0000:86:00.0: cvl_0_0 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:53.377 Found net devices under 0000:86:00.1: cvl_0_1 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:53.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:12:53.377 00:12:53.377 --- 10.0.0.2 ping statistics --- 00:12:53.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.377 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:53.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:12:53.377 00:12:53.377 --- 10.0.0.1 ping statistics --- 00:12:53.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.377 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3634167 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3634167 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3634167 ']' 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
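nvmf_tcp_init above splits the two e810 ports so one physical machine can host both ends of the link: the target port (cvl_0_0, 10.0.0.2) moves into its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the default one. Reduced to its commands:

    # Reduced form of the nvmf_tcp_init sequence traced above.
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"   # target-side port into the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Admit NVMe/TCP on the initiator side, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1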
00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:53.377 21:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:53.377 [2024-07-15 21:50:47.400486] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:12:53.377 [2024-07-15 21:50:47.400529] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.377 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.377 [2024-07-15 21:50:47.457802] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.378 [2024-07-15 21:50:47.539115] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.378 [2024-07-15 21:50:47.539151] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.378 [2024-07-15 21:50:47.539158] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.378 [2024-07-15 21:50:47.539164] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.378 [2024-07-15 21:50:47.539169] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.378 [2024-07-15 21:50:47.539205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.378 [2024-07-15 21:50:47.539227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.378 [2024-07-15 21:50:47.539334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.378 [2024-07-15 21:50:47.539334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:54.315 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.315 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:54.315 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.315 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.316 [2024-07-15 21:50:48.260154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:54.316 21:50:48 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.316 Malloc0 00:12:54.316 [2024-07-15 21:50:48.319989] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3634432 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3634432 /var/tmp/bdevperf.sock 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3634432 ']' 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:54.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
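The batched rpcs.txt that creates the subsystem is not echoed in the trace, but the transport creation is, and the TCP listen NOTICE on 10.0.0.2:4420 plus the bdevperf connection JSON below pin down roughly what the batch must contain. An approximate reconstruction (the serial number follows NVMF_SERIAL from common.sh; names and flags beyond the transport line are illustrative):

    # Approximate content of the batched target setup; the heredoc itself is
    # not visible in this trace, so treat the untraced calls as illustrative.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode0

    "$RPC" nvmf_create_transport -t tcp -o -u 8192   # opts from NVMF_TRANSPORT_OPTS
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
    "$RPC" nvmf_subsystem_add_ns "$nqn" Malloc0
    "$RPC" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420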
00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:54.316 { 00:12:54.316 "params": { 00:12:54.316 "name": "Nvme$subsystem", 00:12:54.316 "trtype": "$TEST_TRANSPORT", 00:12:54.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:54.316 "adrfam": "ipv4", 00:12:54.316 "trsvcid": "$NVMF_PORT", 00:12:54.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:54.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:54.316 "hdgst": ${hdgst:-false}, 00:12:54.316 "ddgst": ${ddgst:-false} 00:12:54.316 }, 00:12:54.316 "method": "bdev_nvme_attach_controller" 00:12:54.316 } 00:12:54.316 EOF 00:12:54.316 )") 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:54.316 21:50:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:54.316 "params": { 00:12:54.316 "name": "Nvme0", 00:12:54.316 "trtype": "tcp", 00:12:54.316 "traddr": "10.0.0.2", 00:12:54.316 "adrfam": "ipv4", 00:12:54.316 "trsvcid": "4420", 00:12:54.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:54.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:54.316 "hdgst": false, 00:12:54.316 "ddgst": false 00:12:54.316 }, 00:12:54.316 "method": "bdev_nvme_attach_controller" 00:12:54.316 }' 00:12:54.316 [2024-07-15 21:50:48.412121] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:12:54.316 [2024-07-15 21:50:48.412164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634432 ] 00:12:54.316 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.316 [2024-07-15 21:50:48.467099] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.316 [2024-07-15 21:50:48.540114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.575 Running I/O for 10 seconds... 
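bdevperf receives its NVMe controller through a generated JSON config on /dev/fd/63 (process substitution) rather than a config file, and the waitforio step that follows polls read completions over bdevperf's own RPC socket until at least 100 reads have landed. A minimal sketch of both halves, wrapping the traced bdev_nvme_attach_controller entry in the full config shape bdevperf expects:

    # Minimal sketch: run bdevperf against the TCP target with an inline JSON
    # config, then poll iostat until the verify workload is clearly running.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bdevperf.sock

    # Same attach parameters as the traced gen_nvmf_target_json output.
    json='{"subsystems":[{"subsystem":"bdev","config":[{
      "method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2",
        "adrfam":"ipv4","trsvcid":"4420",
        "subnqn":"nqn.2016-06.io.spdk:cnode0",
        "hostnqn":"nqn.2016-06.io.spdk:host0",
        "hdgst":false,"ddgst":false}}]}]}'

    # -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify, -t 10: run 10 s.
    "$SPDK/build/examples/bdevperf" -r "$sock" -q 64 -o 65536 -w verify -t 10 \
        --json <(printf '%s' "$json") &

    "$SPDK/scripts/rpc.py" -s "$sock" framework_wait_init

    # waitforio: the test treats >=100 completed reads as "I/O is flowing".
    while (( $("$SPDK/scripts/rpc.py" -s "$sock" bdev_get_iostat -b Nvme0n1 \
               | jq -r '.bdevs[0].num_read_ops') < 100 )); do
        sleep 0.25
    done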
00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:55.145 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=904 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 904 -ge 100 ']' 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.146 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.146 [2024-07-15 21:50:49.311391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2b720 is same with the state(5) to be set 00:12:55.146 [2024-07-15 21:50:49.311802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.311850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.311868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.311883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.311904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.311919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.311935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.311954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.311971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.311987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.311995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.146 [2024-07-15 21:50:49.312279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.146 [2024-07-15 21:50:49.312288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:12:55.147 [2024-07-15 21:50:49.312376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 
[2024-07-15 21:50:49.312555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 
21:50:49.312728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.147 [2024-07-15 21:50:49.312850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.147 [2024-07-15 21:50:49.312858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.148 [2024-07-15 21:50:49.312867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.148 [2024-07-15 21:50:49.312875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.148 [2024-07-15 21:50:49.312883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.148 [2024-07-15 21:50:49.312890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.148 [2024-07-15 
21:50:49.312899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.148 [2024-07-15 21:50:49.312906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.148 [2024-07-15 21:50:49.312915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.148 [2024-07-15 21:50:49.312922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.148 [2024-07-15 21:50:49.312931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.148 [2024-07-15 21:50:49.312938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.148 [2024-07-15 21:50:49.313002] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x208b7e0 was disconnected and freed. reset controller. 00:12:55.148 [2024-07-15 21:50:49.313926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:55.148 task offset: 128128 on job bdev=Nvme0n1 fails 00:12:55.148 00:12:55.148 Latency(us) 00:12:55.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.148 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:55.148 Job: Nvme0n1 ended in about 0.58 seconds with error 00:12:55.148 Verification LBA range: start 0x0 length 0x400 00:12:55.148 Nvme0n1 : 0.58 1667.95 104.25 110.62 0.00 35250.90 1688.26 33280.89 00:12:55.148 =================================================================================================================== 00:12:55.148 Total : 1667.95 104.25 110.62 0.00 35250.90 1688.26 33280.89 00:12:55.148 [2024-07-15 21:50:49.315540] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:55.148 [2024-07-15 21:50:49.315559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c59ad0 (9): Bad file descriptor 00:12:55.148 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.148 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:55.148 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.148 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.148 [2024-07-15 21:50:49.322945] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:12:55.148 [2024-07-15 21:50:49.323041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:55.148 [2024-07-15 21:50:49.323065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.148 [2024-07-15 21:50:49.323079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:12:55.148 [2024-07-15 21:50:49.323087] nvme_fabric.c: 
611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:12:55.148 [2024-07-15 21:50:49.323095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:12:55.148 [2024-07-15 21:50:49.323101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c59ad0 00:12:55.148 [2024-07-15 21:50:49.323120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c59ad0 (9): Bad file descriptor 00:12:55.148 [2024-07-15 21:50:49.323133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:12:55.148 [2024-07-15 21:50:49.323140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:12:55.148 [2024-07-15 21:50:49.323148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:12:55.148 [2024-07-15 21:50:49.323161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:12:55.148 21:50:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.148 21:50:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:56.527 21:50:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3634432 00:12:56.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3634432) - No such process 00:12:56.527 21:50:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:56.527 21:50:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:56.527 21:50:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:56.527 21:50:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:56.527 21:50:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:56.527 21:50:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:56.527 21:50:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:56.527 21:50:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:56.527 { 00:12:56.527 "params": { 00:12:56.528 "name": "Nvme$subsystem", 00:12:56.528 "trtype": "$TEST_TRANSPORT", 00:12:56.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:56.528 "adrfam": "ipv4", 00:12:56.528 "trsvcid": "$NVMF_PORT", 00:12:56.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:56.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:56.528 "hdgst": ${hdgst:-false}, 00:12:56.528 "ddgst": ${ddgst:-false} 00:12:56.528 }, 00:12:56.528 "method": "bdev_nvme_attach_controller" 00:12:56.528 } 00:12:56.528 EOF 00:12:56.528 )") 00:12:56.528 21:50:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:56.528 21:50:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:12:56.528 21:50:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:56.528 21:50:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:56.528 "params": { 00:12:56.528 "name": "Nvme0", 00:12:56.528 "trtype": "tcp", 00:12:56.528 "traddr": "10.0.0.2", 00:12:56.528 "adrfam": "ipv4", 00:12:56.528 "trsvcid": "4420", 00:12:56.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:56.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:56.528 "hdgst": false, 00:12:56.528 "ddgst": false 00:12:56.528 }, 00:12:56.528 "method": "bdev_nvme_attach_controller" 00:12:56.528 }' 00:12:56.528 [2024-07-15 21:50:50.380172] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:12:56.528 [2024-07-15 21:50:50.380220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634689 ] 00:12:56.528 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.528 [2024-07-15 21:50:50.434970] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.528 [2024-07-15 21:50:50.509748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.528 Running I/O for 1 seconds... 00:12:57.908 00:12:57.908 Latency(us) 00:12:57.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.908 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:57.908 Verification LBA range: start 0x0 length 0x400 00:12:57.908 Nvme0n1 : 1.03 1739.61 108.73 0.00 0.00 36189.15 5214.39 33508.84 00:12:57.908 =================================================================================================================== 00:12:57.908 Total : 1739.61 108.73 0.00 0.00 36189.15 5214.39 33508.84 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.908 rmmod nvme_tcp 00:12:57.908 rmmod nvme_fabrics 00:12:57.908 rmmod nvme_keyring 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.908 21:50:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:57.908 21:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:57.908 21:50:52 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 3634167 ']' 00:12:57.908 21:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3634167 00:12:57.908 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3634167 ']' 00:12:57.908 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3634167 00:12:57.909 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:57.909 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:57.909 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3634167 00:12:57.909 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:57.909 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:57.909 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3634167' 00:12:57.909 killing process with pid 3634167 00:12:57.909 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3634167 00:12:57.909 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3634167 00:12:58.167 [2024-07-15 21:50:52.229846] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:58.167 21:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.167 21:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.167 21:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.167 21:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.167 21:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.167 21:50:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.168 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.168 21:50:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.132 21:50:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.132 21:50:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:00.132 00:13:00.132 real 0m12.213s 00:13:00.132 user 0m22.673s 00:13:00.132 sys 0m5.034s 00:13:00.132 21:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:00.132 21:50:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:00.132 ************************************ 00:13:00.132 END TEST nvmf_host_management 00:13:00.132 ************************************ 00:13:00.132 21:50:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:00.132 21:50:54 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:00.132 21:50:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:00.132 21:50:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.132 21:50:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.392 ************************************ 00:13:00.392 START TEST nvmf_lvol 00:13:00.392 
************************************ 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:00.392 * Looking for test storage... 00:13:00.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:13:00.392 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.393 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.393 21:50:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.393 21:50:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.393 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.393 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.393 21:50:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.393 21:50:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:05.670 21:50:59 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:05.670 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:05.670 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:05.670 Found net devices under 0000:86:00.0: cvl_0_0 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:05.670 Found net devices under 0000:86:00.1: cvl_0_1 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.670 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.929 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.929 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.929 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:05.929 21:50:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:05.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:13:05.929 00:13:05.929 --- 10.0.0.2 ping statistics --- 00:13:05.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.929 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:05.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:13:05.929 00:13:05.929 --- 10.0.0.1 ping statistics --- 00:13:05.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.929 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3638457 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3638457 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3638457 ']' 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.929 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:05.929 [2024-07-15 21:51:00.159666] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:13:05.929 [2024-07-15 21:51:00.159708] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.187 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.187 [2024-07-15 21:51:00.220756] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.187 [2024-07-15 21:51:00.297994] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.187 [2024-07-15 21:51:00.298035] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
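Condensed from the setup commands above: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace as the target side, the peer port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side, and the two pings verify the path in both directions:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator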
00:13:06.187 [2024-07-15 21:51:00.298043] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.187 [2024-07-15 21:51:00.298052] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.187 [2024-07-15 21:51:00.298059] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.187 [2024-07-15 21:51:00.298107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.187 [2024-07-15 21:51:00.298203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.187 [2024-07-15 21:51:00.298204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.754 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.754 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:06.754 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.754 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.754 21:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:07.011 21:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.011 21:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:07.011 [2024-07-15 21:51:01.151892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.011 21:51:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:07.268 21:51:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:07.268 21:51:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:07.578 21:51:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:07.578 21:51:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:07.578 21:51:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:07.836 21:51:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7e442079-95a6-4993-ac5f-5ed20e7d4697 00:13:07.836 21:51:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e442079-95a6-4993-ac5f-5ed20e7d4697 lvol 20 00:13:08.093 21:51:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0638dc8b-e656-4b4c-b8ca-e465581daf22 00:13:08.094 21:51:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:08.094 21:51:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0638dc8b-e656-4b4c-b8ca-e465581daf22 00:13:08.351 21:51:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
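For reference, the target bring-up that the trace above walks through is a plain sequence of SPDK RPCs. A condensed sketch, abbreviating the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path to rpc.py and using placeholder UUIDs for the values each call prints (7e442079-... and 0638dc8b-... in this run):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                  # Malloc0
  rpc.py bdev_malloc_create 64 512                  # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs         # prints the lvstore UUID
  rpc.py bdev_lvol_create -u <lvs_uuid> lvol 20     # 20 MiB lvol, prints its UUID
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420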
00:13:08.610 [2024-07-15 21:51:02.662779] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.610 21:51:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:08.867 21:51:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3639060 00:13:08.867 21:51:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:08.867 21:51:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:08.867 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.802 21:51:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0638dc8b-e656-4b4c-b8ca-e465581daf22 MY_SNAPSHOT 00:13:10.061 21:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=60a078c0-2773-42ca-a482-d0e16a12d00e 00:13:10.061 21:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0638dc8b-e656-4b4c-b8ca-e465581daf22 30 00:13:10.319 21:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 60a078c0-2773-42ca-a482-d0e16a12d00e MY_CLONE 00:13:10.577 21:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b0d971de-cd46-4a2d-b89d-0174fbb4024c 00:13:10.577 21:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b0d971de-cd46-4a2d-b89d-0174fbb4024c 00:13:11.144 21:51:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3639060 00:13:19.261 Initializing NVMe Controllers 00:13:19.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:19.261 Controller IO queue size 128, less than required. 00:13:19.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:19.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:19.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:19.261 Initialization complete. Launching workers. 
00:13:19.261 ========================================================
00:13:19.261 Latency(us)
00:13:19.261 Device Information : IOPS MiB/s Average min max
00:13:19.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12268.70 47.92 10440.27 1428.74 101916.69
00:13:19.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12233.70 47.79 10464.81 3657.90 41787.59
00:13:19.261 ========================================================
00:13:19.261 Total : 24502.40 95.71 10452.52 1428.74 101916.69
00:13:19.261
00:13:19.261 21:51:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:13:19.261 21:51:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0638dc8b-e656-4b4c-b8ca-e465581daf22
00:13:19.520 21:51:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e442079-95a6-4993-ac5f-5ed20e7d4697
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:19.779 rmmod nvme_tcp
00:13:19.779 rmmod nvme_fabrics
00:13:19.779 rmmod nvme_keyring
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3638457 ']'
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3638457
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3638457 ']'
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3638457
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3638457
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3638457'
00:13:19.779 killing process with pid 3638457
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3638457
00:13:19.779 21:51:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3638457
00:13:20.038 21:51:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:20.038
21:51:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.038 21:51:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:20.038 21:51:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.038 21:51:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.038 21:51:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.038 21:51:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.038 21:51:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.945 21:51:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:21.945 00:13:21.945 real 0m21.790s 00:13:21.945 user 1m4.082s 00:13:21.945 sys 0m6.874s 00:13:21.945 21:51:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.945 21:51:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:21.945 ************************************ 00:13:21.945 END TEST nvmf_lvol 00:13:21.945 ************************************ 00:13:22.204 21:51:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:22.204 21:51:16 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:22.204 21:51:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:22.204 21:51:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.204 21:51:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:22.204 ************************************ 00:13:22.204 START TEST nvmf_lvs_grow 00:13:22.204 ************************************ 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:22.205 * Looking for test storage... 
00:13:22.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:22.205 21:51:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:27.530 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:27.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:27.530 Found net devices under 0000:86:00.0: cvl_0_0 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:27.530 Found net devices under 0000:86:00.1: cvl_0_1 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.530 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:13:27.531 00:13:27.531 --- 10.0.0.2 ping statistics --- 00:13:27.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.531 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:13:27.531 00:13:27.531 --- 10.0.0.1 ping statistics --- 00:13:27.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.531 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3644804 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3644804 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3644804 ']' 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.531 21:51:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:27.789 [2024-07-15 21:51:21.783046] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:13:27.789 [2024-07-15 21:51:21.783087] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.789 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.789 [2024-07-15 21:51:21.841350] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.789 [2024-07-15 21:51:21.913542] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.789 [2024-07-15 21:51:21.913585] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:27.789 [2024-07-15 21:51:21.913592] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.789 [2024-07-15 21:51:21.913598] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.789 [2024-07-15 21:51:21.913602] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.789 [2024-07-15 21:51:21.913621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.354 21:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.354 21:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:28.354 21:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.354 21:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:28.354 21:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:28.612 [2024-07-15 21:51:22.769360] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:28.612 ************************************ 00:13:28.612 START TEST lvs_grow_clean 00:13:28.612 ************************************ 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:28.612 21:51:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:28.871 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:28.871 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:29.130 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:29.130 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:29.130 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:29.130 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:29.130 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:29.130 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 lvol 150 00:13:29.388 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=456d473e-9f67-4d41-905f-c18501cb10cd 00:13:29.388 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.388 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:29.647 [2024-07-15 21:51:23.695703] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:29.647 [2024-07-15 21:51:23.695752] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:29.647 true 00:13:29.647 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:29.647 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:29.906 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:29.906 21:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:29.906 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 456d473e-9f67-4d41-905f-c18501cb10cd 00:13:30.165 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:30.165 [2024-07-15 21:51:24.373758] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.165 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3645307 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3645307 /var/tmp/bdevperf.sock 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3645307 ']' 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:30.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.425 21:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:30.425 [2024-07-15 21:51:24.604917] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:13:30.425 [2024-07-15 21:51:24.604963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645307 ] 00:13:30.425 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.425 [2024-07-15 21:51:24.659074] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.685 [2024-07-15 21:51:24.731196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.253 21:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.253 21:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:31.253 21:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:31.513 Nvme0n1 00:13:31.513 21:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:31.772 [ 00:13:31.772 { 00:13:31.772 "name": "Nvme0n1", 00:13:31.772 "aliases": [ 00:13:31.772 "456d473e-9f67-4d41-905f-c18501cb10cd" 00:13:31.772 ], 00:13:31.772 "product_name": "NVMe disk", 00:13:31.772 "block_size": 4096, 00:13:31.772 "num_blocks": 38912, 00:13:31.772 "uuid": "456d473e-9f67-4d41-905f-c18501cb10cd", 00:13:31.772 "assigned_rate_limits": { 00:13:31.772 "rw_ios_per_sec": 0, 00:13:31.772 "rw_mbytes_per_sec": 0, 00:13:31.772 "r_mbytes_per_sec": 0, 00:13:31.772 "w_mbytes_per_sec": 0 00:13:31.772 }, 00:13:31.772 "claimed": false, 00:13:31.772 "zoned": false, 00:13:31.772 "supported_io_types": { 00:13:31.772 "read": true, 00:13:31.772 "write": true, 00:13:31.772 "unmap": true, 00:13:31.772 "flush": true, 00:13:31.772 "reset": true, 00:13:31.772 "nvme_admin": true, 00:13:31.772 "nvme_io": true, 00:13:31.772 "nvme_io_md": false, 00:13:31.772 "write_zeroes": true, 00:13:31.772 "zcopy": false, 00:13:31.772 "get_zone_info": false, 00:13:31.772 "zone_management": false, 00:13:31.772 "zone_append": false, 00:13:31.772 "compare": true, 00:13:31.772 "compare_and_write": true, 00:13:31.772 "abort": true, 00:13:31.772 "seek_hole": false, 00:13:31.772 "seek_data": false, 00:13:31.772 "copy": true, 00:13:31.772 "nvme_iov_md": false 00:13:31.772 }, 00:13:31.772 "memory_domains": [ 00:13:31.772 { 00:13:31.772 "dma_device_id": "system", 00:13:31.772 "dma_device_type": 1 00:13:31.772 } 00:13:31.772 ], 00:13:31.772 "driver_specific": { 00:13:31.772 "nvme": [ 00:13:31.772 { 00:13:31.772 "trid": { 00:13:31.772 "trtype": "TCP", 00:13:31.772 "adrfam": "IPv4", 00:13:31.772 "traddr": "10.0.0.2", 00:13:31.772 "trsvcid": "4420", 00:13:31.772 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:31.772 }, 00:13:31.772 "ctrlr_data": { 00:13:31.772 "cntlid": 1, 00:13:31.772 "vendor_id": "0x8086", 00:13:31.772 "model_number": "SPDK bdev Controller", 00:13:31.772 "serial_number": "SPDK0", 00:13:31.772 "firmware_revision": "24.09", 00:13:31.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:31.772 "oacs": { 00:13:31.772 "security": 0, 00:13:31.772 "format": 0, 00:13:31.772 "firmware": 0, 00:13:31.772 "ns_manage": 0 00:13:31.772 }, 00:13:31.772 "multi_ctrlr": true, 00:13:31.772 "ana_reporting": false 00:13:31.772 }, 
00:13:31.772 "vs": { 00:13:31.772 "nvme_version": "1.3" 00:13:31.772 }, 00:13:31.772 "ns_data": { 00:13:31.772 "id": 1, 00:13:31.772 "can_share": true 00:13:31.772 } 00:13:31.772 } 00:13:31.772 ], 00:13:31.772 "mp_policy": "active_passive" 00:13:31.772 } 00:13:31.772 } 00:13:31.772 ] 00:13:31.772 21:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3645546 00:13:31.772 21:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:31.772 21:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:31.772 Running I/O for 10 seconds... 00:13:33.150 Latency(us) 00:13:33.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.150 Nvme0n1 : 1.00 21894.00 85.52 0.00 0.00 0.00 0.00 0.00 00:13:33.150 =================================================================================================================== 00:13:33.150 Total : 21894.00 85.52 0.00 0.00 0.00 0.00 0.00 00:13:33.150 00:13:33.718 21:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:33.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.718 Nvme0n1 : 2.00 22027.00 86.04 0.00 0.00 0.00 0.00 0.00 00:13:33.718 =================================================================================================================== 00:13:33.718 Total : 22027.00 86.04 0.00 0.00 0.00 0.00 0.00 00:13:33.718 00:13:33.977 true 00:13:33.977 21:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:33.977 21:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:34.236 21:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:34.236 21:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:34.236 21:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3645546 00:13:34.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.805 Nvme0n1 : 3.00 22060.67 86.17 0.00 0.00 0.00 0.00 0.00 00:13:34.805 =================================================================================================================== 00:13:34.805 Total : 22060.67 86.17 0.00 0.00 0.00 0.00 0.00 00:13:34.805 00:13:35.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.742 Nvme0n1 : 4.00 22111.50 86.37 0.00 0.00 0.00 0.00 0.00 00:13:35.742 =================================================================================================================== 00:13:35.742 Total : 22111.50 86.37 0.00 0.00 0.00 0.00 0.00 00:13:35.742 00:13:37.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.120 Nvme0n1 : 5.00 22132.40 86.45 0.00 0.00 0.00 0.00 0.00 00:13:37.120 =================================================================================================================== 00:13:37.120 
Total : 22132.40 86.45 0.00 0.00 0.00 0.00 0.00
00:13:37.120
00:13:38.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:38.058 Nvme0n1 : 6.00 22167.67 86.59 0.00 0.00 0.00 0.00 0.00
00:13:38.058 ===================================================================================================================
00:13:38.058 Total : 22167.67 86.59 0.00 0.00 0.00 0.00 0.00
00:13:38.058
00:13:38.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:38.995 Nvme0n1 : 7.00 22195.14 86.70 0.00 0.00 0.00 0.00 0.00
00:13:38.995 ===================================================================================================================
00:13:38.995 Total : 22195.14 86.70 0.00 0.00 0.00 0.00 0.00
00:13:38.995
00:13:39.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:39.933 Nvme0n1 : 8.00 22220.75 86.80 0.00 0.00 0.00 0.00 0.00
00:13:39.933 ===================================================================================================================
00:13:39.933 Total : 22220.75 86.80 0.00 0.00 0.00 0.00 0.00
00:13:39.933
00:13:40.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:40.870 Nvme0n1 : 9.00 22240.67 86.88 0.00 0.00 0.00 0.00 0.00
00:13:40.870 ===================================================================================================================
00:13:40.870 Total : 22240.67 86.88 0.00 0.00 0.00 0.00 0.00
00:13:40.870
00:13:41.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:41.843 Nvme0n1 : 10.00 22245.40 86.90 0.00 0.00 0.00 0.00 0.00
00:13:41.843 ===================================================================================================================
00:13:41.843 Total : 22245.40 86.90 0.00 0.00 0.00 0.00 0.00
00:13:41.843
00:13:41.843
00:13:41.843 Latency(us)
00:13:41.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:41.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:41.843 Nvme0n1 : 10.01 22245.56 86.90 0.00 0.00 5749.84 4046.14 9744.92
00:13:41.843 ===================================================================================================================
00:13:41.843 Total : 22245.56 86.90 0.00 0.00 5749.84 4046.14 9744.92
00:13:41.843 0
00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3645307
00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3645307 ']'
00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3645307
00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname
00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3645307
00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3645307'
00:13:41.843 killing process with pid 3645307
00:13:41.843 21:51:36
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3645307 00:13:41.843 Received shutdown signal, test time was about 10.000000 seconds 00:13:41.843 00:13:41.843 Latency(us) 00:13:41.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.843 =================================================================================================================== 00:13:41.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:41.843 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3645307 00:13:42.102 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:42.359 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:42.359 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:42.359 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:42.618 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:42.618 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:42.618 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:42.876 [2024-07-15 21:51:36.919814] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:42.876 21:51:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:43.135 request: 00:13:43.135 { 00:13:43.135 "uuid": "35e6efd3-7f42-4da8-9d27-031517acd8b8", 00:13:43.135 "method": "bdev_lvol_get_lvstores", 00:13:43.135 "req_id": 1 00:13:43.135 } 00:13:43.135 Got JSON-RPC error response 00:13:43.135 response: 00:13:43.135 { 00:13:43.135 "code": -19, 00:13:43.135 "message": "No such device" 00:13:43.135 } 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:43.135 aio_bdev 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 456d473e-9f67-4d41-905f-c18501cb10cd 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=456d473e-9f67-4d41-905f-c18501cb10cd 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:43.135 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:43.395 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 456d473e-9f67-4d41-905f-c18501cb10cd -t 2000 00:13:43.655 [ 00:13:43.655 { 00:13:43.655 "name": "456d473e-9f67-4d41-905f-c18501cb10cd", 00:13:43.655 "aliases": [ 00:13:43.655 "lvs/lvol" 00:13:43.655 ], 00:13:43.655 "product_name": "Logical Volume", 00:13:43.655 "block_size": 4096, 00:13:43.655 "num_blocks": 38912, 00:13:43.655 "uuid": "456d473e-9f67-4d41-905f-c18501cb10cd", 00:13:43.655 "assigned_rate_limits": { 00:13:43.655 "rw_ios_per_sec": 0, 00:13:43.655 "rw_mbytes_per_sec": 0, 00:13:43.655 "r_mbytes_per_sec": 0, 00:13:43.655 "w_mbytes_per_sec": 0 00:13:43.655 }, 00:13:43.655 "claimed": false, 00:13:43.655 "zoned": false, 00:13:43.655 "supported_io_types": { 00:13:43.655 "read": true, 00:13:43.655 "write": true, 00:13:43.655 "unmap": true, 00:13:43.655 "flush": false, 00:13:43.655 "reset": true, 00:13:43.655 "nvme_admin": false, 00:13:43.655 "nvme_io": false, 00:13:43.655 
"nvme_io_md": false, 00:13:43.655 "write_zeroes": true, 00:13:43.655 "zcopy": false, 00:13:43.655 "get_zone_info": false, 00:13:43.655 "zone_management": false, 00:13:43.655 "zone_append": false, 00:13:43.655 "compare": false, 00:13:43.655 "compare_and_write": false, 00:13:43.655 "abort": false, 00:13:43.655 "seek_hole": true, 00:13:43.655 "seek_data": true, 00:13:43.655 "copy": false, 00:13:43.655 "nvme_iov_md": false 00:13:43.655 }, 00:13:43.655 "driver_specific": { 00:13:43.655 "lvol": { 00:13:43.655 "lvol_store_uuid": "35e6efd3-7f42-4da8-9d27-031517acd8b8", 00:13:43.655 "base_bdev": "aio_bdev", 00:13:43.655 "thin_provision": false, 00:13:43.655 "num_allocated_clusters": 38, 00:13:43.655 "snapshot": false, 00:13:43.655 "clone": false, 00:13:43.655 "esnap_clone": false 00:13:43.655 } 00:13:43.655 } 00:13:43.655 } 00:13:43.655 ] 00:13:43.655 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:43.655 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:43.655 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:43.655 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:43.655 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:43.655 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:43.915 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:43.915 21:51:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 456d473e-9f67-4d41-905f-c18501cb10cd 00:13:44.175 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 35e6efd3-7f42-4da8-9d27-031517acd8b8 00:13:44.175 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.434 00:13:44.434 real 0m15.766s 00:13:44.434 user 0m15.417s 00:13:44.434 sys 0m1.462s 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:44.434 ************************************ 00:13:44.434 END TEST lvs_grow_clean 00:13:44.434 ************************************ 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.434 ************************************ 00:13:44.434 START TEST lvs_grow_dirty 00:13:44.434 ************************************ 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.434 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:44.694 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:44.694 21:51:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:44.957 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:13:44.957 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:13:44.957 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:44.957 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:44.957 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:45.216 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af lvol 150 00:13:45.216 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fe351032-d9e9-4deb-9f62-cf681ae55f45 00:13:45.217 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:45.217 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:45.475 
[2024-07-15 21:51:39.537638] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:45.475 [2024-07-15 21:51:39.537689] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:45.475 true 00:13:45.476 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:13:45.476 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:45.476 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:45.476 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:45.735 21:51:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe351032-d9e9-4deb-9f62-cf681ae55f45 00:13:45.994 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:45.994 [2024-07-15 21:51:40.215684] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.994 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3647908 00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3647908 /var/tmp/bdevperf.sock 00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3647908 ']' 00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:46.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
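For orientation, the dirty-grow flow the trace above just set up can be condensed into a short shell sketch. This is a reader's recap rather than part of the harness: $SPDK stands in for the absolute /var/jenkins/workspace/.../spdk path in the trace, rpc is its scripts/rpc.py, and AIO is test/nvmf/target/aio_bdev; every RPC named here appears verbatim in the log.

  rpc=$SPDK/scripts/rpc.py
  AIO=$SPDK/test/nvmf/target/aio_bdev
  truncate -s 200M "$AIO"                                    # 200 MiB backing file
  $rpc bdev_aio_create "$AIO" aio_bdev 4096                  # 4 KiB-block AIO bdev
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)     # 4 MiB clusters -> 49 data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)           # 150 MiB lvol on the store
  truncate -s 400M "$AIO"                                    # grow the file underneath the bdev
  $rpc bdev_aio_rescan aio_bdev                              # bdev resizes: 51200 -> 102400 blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

While bdevperf then drives a 10-second randwrite workload over TCP, the harness issues bdev_lvol_grow_lvstore -u "$lvs" (visible at the 2 s sample below) and checks that total_data_clusters grows from 49 to 99.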
00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.253 21:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:46.253 [2024-07-15 21:51:40.445646] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:13:46.253 [2024-07-15 21:51:40.445692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647908 ] 00:13:46.253 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.513 [2024-07-15 21:51:40.500445] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.513 [2024-07-15 21:51:40.579442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.081 21:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.081 21:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:47.081 21:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:47.340 Nvme0n1 00:13:47.340 21:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:47.600 [ 00:13:47.600 { 00:13:47.600 "name": "Nvme0n1", 00:13:47.600 "aliases": [ 00:13:47.600 "fe351032-d9e9-4deb-9f62-cf681ae55f45" 00:13:47.600 ], 00:13:47.600 "product_name": "NVMe disk", 00:13:47.600 "block_size": 4096, 00:13:47.600 "num_blocks": 38912, 00:13:47.600 "uuid": "fe351032-d9e9-4deb-9f62-cf681ae55f45", 00:13:47.600 "assigned_rate_limits": { 00:13:47.600 "rw_ios_per_sec": 0, 00:13:47.600 "rw_mbytes_per_sec": 0, 00:13:47.600 "r_mbytes_per_sec": 0, 00:13:47.600 "w_mbytes_per_sec": 0 00:13:47.600 }, 00:13:47.600 "claimed": false, 00:13:47.600 "zoned": false, 00:13:47.600 "supported_io_types": { 00:13:47.600 "read": true, 00:13:47.600 "write": true, 00:13:47.600 "unmap": true, 00:13:47.600 "flush": true, 00:13:47.600 "reset": true, 00:13:47.600 "nvme_admin": true, 00:13:47.600 "nvme_io": true, 00:13:47.600 "nvme_io_md": false, 00:13:47.600 "write_zeroes": true, 00:13:47.600 "zcopy": false, 00:13:47.600 "get_zone_info": false, 00:13:47.600 "zone_management": false, 00:13:47.600 "zone_append": false, 00:13:47.600 "compare": true, 00:13:47.600 "compare_and_write": true, 00:13:47.600 "abort": true, 00:13:47.600 "seek_hole": false, 00:13:47.600 "seek_data": false, 00:13:47.600 "copy": true, 00:13:47.600 "nvme_iov_md": false 00:13:47.600 }, 00:13:47.600 "memory_domains": [ 00:13:47.600 { 00:13:47.600 "dma_device_id": "system", 00:13:47.600 "dma_device_type": 1 00:13:47.600 } 00:13:47.600 ], 00:13:47.600 "driver_specific": { 00:13:47.600 "nvme": [ 00:13:47.600 { 00:13:47.600 "trid": { 00:13:47.600 "trtype": "TCP", 00:13:47.600 "adrfam": "IPv4", 00:13:47.600 "traddr": "10.0.0.2", 00:13:47.600 "trsvcid": "4420", 00:13:47.600 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:47.600 }, 00:13:47.600 "ctrlr_data": { 00:13:47.600 "cntlid": 1, 00:13:47.600 "vendor_id": "0x8086", 00:13:47.600 "model_number": "SPDK bdev Controller", 00:13:47.600 "serial_number": "SPDK0", 
00:13:47.600 "firmware_revision": "24.09", 00:13:47.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:47.600 "oacs": { 00:13:47.600 "security": 0, 00:13:47.600 "format": 0, 00:13:47.600 "firmware": 0, 00:13:47.600 "ns_manage": 0 00:13:47.600 }, 00:13:47.600 "multi_ctrlr": true, 00:13:47.600 "ana_reporting": false 00:13:47.601 }, 00:13:47.601 "vs": { 00:13:47.601 "nvme_version": "1.3" 00:13:47.601 }, 00:13:47.601 "ns_data": { 00:13:47.601 "id": 1, 00:13:47.601 "can_share": true 00:13:47.601 } 00:13:47.601 } 00:13:47.601 ], 00:13:47.601 "mp_policy": "active_passive" 00:13:47.601 } 00:13:47.601 } 00:13:47.601 ] 00:13:47.601 21:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3648139 00:13:47.601 21:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:47.601 21:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:47.601 Running I/O for 10 seconds... 00:13:48.539 Latency(us) 00:13:48.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.539 Nvme0n1 : 1.00 21686.00 84.71 0.00 0.00 0.00 0.00 0.00 00:13:48.539 =================================================================================================================== 00:13:48.539 Total : 21686.00 84.71 0.00 0.00 0.00 0.00 0.00 00:13:48.539 00:13:49.477 21:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:13:49.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.736 Nvme0n1 : 2.00 21831.00 85.28 0.00 0.00 0.00 0.00 0.00 00:13:49.736 =================================================================================================================== 00:13:49.736 Total : 21831.00 85.28 0.00 0.00 0.00 0.00 0.00 00:13:49.736 00:13:49.736 true 00:13:49.736 21:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:13:49.736 21:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:49.995 21:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:49.995 21:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:49.995 21:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3648139 00:13:50.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.563 Nvme0n1 : 3.00 21868.67 85.42 0.00 0.00 0.00 0.00 0.00 00:13:50.563 =================================================================================================================== 00:13:50.563 Total : 21868.67 85.42 0.00 0.00 0.00 0.00 0.00 00:13:50.563 00:13:51.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.941 Nvme0n1 : 4.00 21921.50 85.63 0.00 0.00 0.00 0.00 0.00 00:13:51.941 =================================================================================================================== 00:13:51.941 Total : 21921.50 85.63 0.00 
0.00 0.00 0.00 0.00 00:13:51.941 00:13:52.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.878 Nvme0n1 : 5.00 21964.40 85.80 0.00 0.00 0.00 0.00 0.00 00:13:52.878 =================================================================================================================== 00:13:52.878 Total : 21964.40 85.80 0.00 0.00 0.00 0.00 0.00 00:13:52.878 00:13:53.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.813 Nvme0n1 : 6.00 22003.67 85.95 0.00 0.00 0.00 0.00 0.00 00:13:53.813 =================================================================================================================== 00:13:53.813 Total : 22003.67 85.95 0.00 0.00 0.00 0.00 0.00 00:13:53.813 00:13:54.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.747 Nvme0n1 : 7.00 22027.14 86.04 0.00 0.00 0.00 0.00 0.00 00:13:54.747 =================================================================================================================== 00:13:54.747 Total : 22027.14 86.04 0.00 0.00 0.00 0.00 0.00 00:13:54.747 00:13:55.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.686 Nvme0n1 : 8.00 22047.75 86.12 0.00 0.00 0.00 0.00 0.00 00:13:55.686 =================================================================================================================== 00:13:55.686 Total : 22047.75 86.12 0.00 0.00 0.00 0.00 0.00 00:13:55.686 00:13:56.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.624 Nvme0n1 : 9.00 22058.44 86.17 0.00 0.00 0.00 0.00 0.00 00:13:56.624 =================================================================================================================== 00:13:56.624 Total : 22058.44 86.17 0.00 0.00 0.00 0.00 0.00 00:13:56.624 00:13:57.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.627 Nvme0n1 : 10.00 22070.20 86.21 0.00 0.00 0.00 0.00 0.00 00:13:57.627 =================================================================================================================== 00:13:57.627 Total : 22070.20 86.21 0.00 0.00 0.00 0.00 0.00 00:13:57.627 00:13:57.627 00:13:57.627 Latency(us) 00:13:57.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.627 Nvme0n1 : 10.01 22070.07 86.21 0.00 0.00 5795.46 4530.53 11910.46 00:13:57.627 =================================================================================================================== 00:13:57.627 Total : 22070.07 86.21 0.00 0.00 5795.46 4530.53 11910.46 00:13:57.627 0 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3647908 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3647908 ']' 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3647908 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3647908 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:57.627 21:51:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3647908' 00:13:57.627 killing process with pid 3647908 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3647908 00:13:57.627 Received shutdown signal, test time was about 10.000000 seconds 00:13:57.627 00:13:57.627 Latency(us) 00:13:57.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.627 =================================================================================================================== 00:13:57.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.627 21:51:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3647908 00:13:57.886 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.145 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3644804 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3644804 00:13:58.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3644804 Killed "${NVMF_APP[@]}" "$@" 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3649985 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3649985 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3649985 ']' 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.405 21:51:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:58.663 [2024-07-15 21:51:52.661899] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:13:58.663 [2024-07-15 21:51:52.661945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.663 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.663 [2024-07-15 21:51:52.718340] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.663 [2024-07-15 21:51:52.796772] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.663 [2024-07-15 21:51:52.796806] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.663 [2024-07-15 21:51:52.796813] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.663 [2024-07-15 21:51:52.796819] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.663 [2024-07-15 21:51:52.796824] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:58.663 [2024-07-15 21:51:52.796840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.231 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.231 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:59.231 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.231 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:59.231 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:59.491 [2024-07-15 21:51:53.658249] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:59.491 [2024-07-15 21:51:53.658333] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:59.491 [2024-07-15 21:51:53.658359] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fe351032-d9e9-4deb-9f62-cf681ae55f45 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=fe351032-d9e9-4deb-9f62-cf681ae55f45 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.491 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:59.765 21:51:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fe351032-d9e9-4deb-9f62-cf681ae55f45 -t 2000 00:13:59.765 [ 00:13:59.765 { 00:13:59.765 "name": "fe351032-d9e9-4deb-9f62-cf681ae55f45", 00:13:59.765 "aliases": [ 00:13:59.765 "lvs/lvol" 00:13:59.765 ], 00:13:59.765 "product_name": "Logical Volume", 00:13:59.765 "block_size": 4096, 00:13:59.765 "num_blocks": 38912, 00:13:59.765 "uuid": "fe351032-d9e9-4deb-9f62-cf681ae55f45", 00:13:59.765 "assigned_rate_limits": { 00:13:59.765 "rw_ios_per_sec": 0, 00:13:59.765 "rw_mbytes_per_sec": 0, 00:13:59.765 "r_mbytes_per_sec": 0, 00:13:59.765 "w_mbytes_per_sec": 0 00:13:59.765 }, 00:13:59.765 "claimed": false, 00:13:59.765 "zoned": false, 00:13:59.765 "supported_io_types": { 00:13:59.765 "read": true, 00:13:59.765 "write": true, 00:13:59.765 "unmap": true, 00:13:59.765 "flush": false, 00:13:59.765 "reset": true, 00:13:59.765 "nvme_admin": false, 00:13:59.765 "nvme_io": false, 00:13:59.765 "nvme_io_md": 
false, 00:13:59.765 "write_zeroes": true, 00:13:59.765 "zcopy": false, 00:13:59.765 "get_zone_info": false, 00:13:59.765 "zone_management": false, 00:13:59.765 "zone_append": false, 00:13:59.765 "compare": false, 00:13:59.765 "compare_and_write": false, 00:13:59.765 "abort": false, 00:13:59.765 "seek_hole": true, 00:13:59.765 "seek_data": true, 00:13:59.765 "copy": false, 00:13:59.765 "nvme_iov_md": false 00:13:59.765 }, 00:13:59.765 "driver_specific": { 00:13:59.765 "lvol": { 00:13:59.765 "lvol_store_uuid": "4b283bc5-d1b6-4e99-9a59-0bd3b65b51af", 00:13:59.765 "base_bdev": "aio_bdev", 00:13:59.765 "thin_provision": false, 00:13:59.765 "num_allocated_clusters": 38, 00:13:59.765 "snapshot": false, 00:13:59.765 "clone": false, 00:13:59.765 "esnap_clone": false 00:13:59.765 } 00:13:59.765 } 00:13:59.765 } 00:13:59.765 ] 00:14:00.025 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:00.025 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:14:00.025 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:00.025 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:00.025 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:14:00.025 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:00.283 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:00.284 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:00.284 [2024-07-15 21:51:54.518735] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:00.543 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:14:00.543 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:00.543 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:14:00.543 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:00.543 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:14:00.544 request: 00:14:00.544 { 00:14:00.544 "uuid": "4b283bc5-d1b6-4e99-9a59-0bd3b65b51af", 00:14:00.544 "method": "bdev_lvol_get_lvstores", 00:14:00.544 "req_id": 1 00:14:00.544 } 00:14:00.544 Got JSON-RPC error response 00:14:00.544 response: 00:14:00.544 { 00:14:00.544 "code": -19, 00:14:00.544 "message": "No such device" 00:14:00.544 } 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:00.544 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:00.803 aio_bdev 00:14:00.803 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fe351032-d9e9-4deb-9f62-cf681ae55f45 00:14:00.803 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=fe351032-d9e9-4deb-9f62-cf681ae55f45 00:14:00.803 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:00.803 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:00.803 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:00.803 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:00.803 21:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:01.061 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fe351032-d9e9-4deb-9f62-cf681ae55f45 -t 2000 00:14:01.061 [ 00:14:01.061 { 00:14:01.061 "name": "fe351032-d9e9-4deb-9f62-cf681ae55f45", 00:14:01.061 "aliases": [ 00:14:01.061 "lvs/lvol" 00:14:01.061 ], 00:14:01.061 "product_name": "Logical Volume", 00:14:01.061 "block_size": 4096, 00:14:01.061 "num_blocks": 38912, 00:14:01.061 "uuid": "fe351032-d9e9-4deb-9f62-cf681ae55f45", 00:14:01.061 "assigned_rate_limits": { 00:14:01.061 "rw_ios_per_sec": 0, 00:14:01.061 "rw_mbytes_per_sec": 0, 00:14:01.061 "r_mbytes_per_sec": 0, 00:14:01.061 "w_mbytes_per_sec": 0 00:14:01.061 }, 00:14:01.061 "claimed": false, 00:14:01.061 "zoned": false, 00:14:01.061 "supported_io_types": { 
00:14:01.061 "read": true, 00:14:01.061 "write": true, 00:14:01.061 "unmap": true, 00:14:01.061 "flush": false, 00:14:01.061 "reset": true, 00:14:01.061 "nvme_admin": false, 00:14:01.061 "nvme_io": false, 00:14:01.061 "nvme_io_md": false, 00:14:01.061 "write_zeroes": true, 00:14:01.061 "zcopy": false, 00:14:01.061 "get_zone_info": false, 00:14:01.061 "zone_management": false, 00:14:01.061 "zone_append": false, 00:14:01.061 "compare": false, 00:14:01.061 "compare_and_write": false, 00:14:01.061 "abort": false, 00:14:01.061 "seek_hole": true, 00:14:01.061 "seek_data": true, 00:14:01.061 "copy": false, 00:14:01.061 "nvme_iov_md": false 00:14:01.061 }, 00:14:01.061 "driver_specific": { 00:14:01.061 "lvol": { 00:14:01.061 "lvol_store_uuid": "4b283bc5-d1b6-4e99-9a59-0bd3b65b51af", 00:14:01.061 "base_bdev": "aio_bdev", 00:14:01.061 "thin_provision": false, 00:14:01.061 "num_allocated_clusters": 38, 00:14:01.061 "snapshot": false, 00:14:01.061 "clone": false, 00:14:01.061 "esnap_clone": false 00:14:01.061 } 00:14:01.061 } 00:14:01.061 } 00:14:01.061 ] 00:14:01.061 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:01.061 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:14:01.061 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:01.320 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:01.320 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:14:01.320 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:01.579 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:01.579 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fe351032-d9e9-4deb-9f62-cf681ae55f45 00:14:01.579 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b283bc5-d1b6-4e99-9a59-0bd3b65b51af 00:14:01.838 21:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:02.098 00:14:02.098 real 0m17.520s 00:14:02.098 user 0m44.641s 00:14:02.098 sys 0m4.022s 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:02.098 ************************************ 00:14:02.098 END TEST lvs_grow_dirty 00:14:02.098 ************************************ 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:02.098 nvmf_trace.0 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.098 rmmod nvme_tcp 00:14:02.098 rmmod nvme_fabrics 00:14:02.098 rmmod nvme_keyring 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3649985 ']' 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3649985 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3649985 ']' 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3649985 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.098 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3649985 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3649985' 00:14:02.357 killing process with pid 3649985 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3649985 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3649985 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.357 
21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.357 21:51:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.891 21:51:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:04.891 00:14:04.891 real 0m42.349s 00:14:04.891 user 1m5.840s 00:14:04.891 sys 0m9.940s 00:14:04.891 21:51:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.891 21:51:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:04.891 ************************************ 00:14:04.891 END TEST nvmf_lvs_grow 00:14:04.891 ************************************ 00:14:04.891 21:51:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:04.891 21:51:58 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:04.891 21:51:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:04.891 21:51:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.891 21:51:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:04.891 ************************************ 00:14:04.891 START TEST nvmf_bdev_io_wait 00:14:04.891 ************************************ 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:04.891 * Looking for test storage... 
00:14:04.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:04.891 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.892 21:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]]
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:14:10.165 Found 0000:86:00.0 (0x8086 - 0x159b)
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:14:10.165 Found 0000:86:00.1 (0x8086 - 0x159b)
00:14:10.165 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:14:10.166 Found net devices under 0000:86:00.0: cvl_0_0
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:14:10.166 Found net devices under 0000:86:00.1: cvl_0_1
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:10.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:10.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms
00:14:10.166
00:14:10.166 --- 10.0.0.2 ping statistics ---
00:14:10.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:10.166 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:10.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:10.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms
00:14:10.166
00:14:10.166 --- 10.0.0.1 ping statistics ---
00:14:10.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:10.166 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3654024
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3654024
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3654024 ']'
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:10.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:10.166 21:52:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.166 [2024-07-15 21:52:03.868100] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:14:10.166 [2024-07-15 21:52:03.868140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:10.166 EAL: No free 2048 kB hugepages reported on node 1
00:14:10.166 [2024-07-15 21:52:03.925464] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:10.166 [2024-07-15 21:52:04.007293] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:10.166 [2024-07-15 21:52:04.007327] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:10.166 [2024-07-15 21:52:04.007335] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:10.166 [2024-07-15 21:52:04.007342] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:10.166 [2024-07-15 21:52:04.007347] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:10.166 [2024-07-15 21:52:04.007388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:14:10.166 [2024-07-15 21:52:04.007481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:14:10.166 [2024-07-15 21:52:04.007545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:14:10.166 [2024-07-15 21:52:04.007547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.735 [2024-07-15 21:52:04.777675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
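Condensed, the bring-up traced above (together with the subsystem RPCs that follow just below) reduces to the plain shell sketch here. It is reconstructed from the commands visible in this log rather than copied from any one SPDK script: $rootdir stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and rpc_cmd is the harness helper that forwards to the target's /var/tmp/spdk.sock.

# Split the dual-port E810 across namespaces so target (cvl_0_0, 10.0.0.2)
# and initiator (cvl_0_1, 10.0.0.1) exercise real hardware on one host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace; --wait-for-rpc defers framework
# init so the deliberately tiny bdev_io pool (-p 5 -c 1), the condition
# this bdev_io_wait test exists to exercise, can be configured first.
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
rpc_cmd bdev_set_options -p 5 -c 1
rpc_cmd framework_start_init
rpc_cmd nvmf_create_transport -t tcp -o -u 8192

# Export one 64 MiB malloc bdev as a namespace of cnode1 on 10.0.0.2:4420.
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420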
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.735 Malloc0
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.735 [2024-07-15 21:52:04.845785] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3654168
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3654171
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=()
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:10.735 {
00:14:10.735 "params": {
00:14:10.735 "name": "Nvme$subsystem",
00:14:10.735 "trtype": "$TEST_TRANSPORT",
00:14:10.735 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:10.735 "adrfam": "ipv4",
00:14:10.735 "trsvcid": "$NVMF_PORT",
00:14:10.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:10.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:10.735 "hdgst": ${hdgst:-false},
00:14:10.735 "ddgst": ${ddgst:-false}
00:14:10.735 },
00:14:10.735 "method": "bdev_nvme_attach_controller"
00:14:10.735 }
00:14:10.735 EOF
00:14:10.735 )")
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3654174
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=()
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:10.735 {
00:14:10.735 "params": {
00:14:10.735 "name": "Nvme$subsystem",
00:14:10.735 "trtype": "$TEST_TRANSPORT",
00:14:10.735 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:10.735 "adrfam": "ipv4",
00:14:10.735 "trsvcid": "$NVMF_PORT",
00:14:10.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:10.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:10.735 "hdgst": ${hdgst:-false},
00:14:10.735 "ddgst": ${ddgst:-false}
00:14:10.735 },
00:14:10.735 "method": "bdev_nvme_attach_controller"
00:14:10.735 }
00:14:10.735 EOF
00:14:10.735 )")
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3654178
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=()
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:10.735 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:10.736 {
00:14:10.736 "params": {
00:14:10.736 "name": "Nvme$subsystem",
00:14:10.736 "trtype": "$TEST_TRANSPORT",
00:14:10.736 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:10.736 "adrfam": "ipv4",
00:14:10.736 "trsvcid": "$NVMF_PORT",
00:14:10.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:10.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:10.736 "hdgst": ${hdgst:-false},
00:14:10.736 "ddgst": ${ddgst:-false}
00:14:10.736 },
00:14:10.736 "method": "bdev_nvme_attach_controller"
00:14:10.736 }
00:14:10.736 EOF
00:14:10.736 )")
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=()
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:10.736 {
00:14:10.736 "params": {
00:14:10.736 "name": "Nvme$subsystem",
00:14:10.736 "trtype": "$TEST_TRANSPORT",
00:14:10.736 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:10.736 "adrfam": "ipv4",
00:14:10.736 "trsvcid": "$NVMF_PORT",
00:14:10.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:10.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:10.736 "hdgst": ${hdgst:-false},
00:14:10.736 "ddgst": ${ddgst:-false}
00:14:10.736 },
00:14:10.736 "method": "bdev_nvme_attach_controller"
00:14:10.736 }
00:14:10.736 EOF
00:14:10.736 )")
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3654168
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:10.736 "params": {
00:14:10.736 "name": "Nvme1",
00:14:10.736 "trtype": "tcp",
00:14:10.736 "traddr": "10.0.0.2",
00:14:10.736 "adrfam": "ipv4",
00:14:10.736 "trsvcid": "4420",
00:14:10.736 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:10.736 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:10.736 "hdgst": false,
00:14:10.736 "ddgst": false
00:14:10.736 },
00:14:10.736 "method": "bdev_nvme_attach_controller"
00:14:10.736 }'
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:10.736 "params": {
00:14:10.736 "name": "Nvme1",
00:14:10.736 "trtype": "tcp",
00:14:10.736 "traddr": "10.0.0.2",
00:14:10.736 "adrfam": "ipv4",
00:14:10.736 "trsvcid": "4420",
00:14:10.736 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:10.736 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:10.736 "hdgst": false,
00:14:10.736 "ddgst": false
00:14:10.736 },
00:14:10.736 "method": "bdev_nvme_attach_controller"
00:14:10.736 }'
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:10.736 "params": {
00:14:10.736 "name": "Nvme1",
00:14:10.736 "trtype": "tcp",
00:14:10.736 "traddr": "10.0.0.2",
00:14:10.736 "adrfam": "ipv4",
00:14:10.736 "trsvcid": "4420",
00:14:10.736 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:10.736 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:10.736 "hdgst": false,
00:14:10.736 "ddgst": false
00:14:10.736 },
00:14:10.736 "method": "bdev_nvme_attach_controller"
00:14:10.736 }'
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:14:10.736 21:52:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:10.736 "params": {
00:14:10.736 "name": "Nvme1",
00:14:10.736 "trtype": "tcp",
00:14:10.736 "traddr": "10.0.0.2",
00:14:10.736 "adrfam": "ipv4",
00:14:10.736 "trsvcid": "4420",
00:14:10.736 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:10.736 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:10.736 "hdgst": false,
00:14:10.736 "ddgst": false
00:14:10.736 },
00:14:10.736 "method": "bdev_nvme_attach_controller"
00:14:10.736 }'
00:14:10.736 [2024-07-15 21:52:04.896122] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:14:10.736 [2024-07-15 21:52:04.896177] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:14:10.736 [2024-07-15 21:52:04.897119] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:14:10.736 [2024-07-15 21:52:04.897165] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:14:10.736 [2024-07-15 21:52:04.897976] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:14:10.736 [2024-07-15 21:52:04.898018] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:14:10.736 [2024-07-15 21:52:04.901599] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
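The interleaved config=()/cat/jq/printf records above are four concurrent runs of gen_nvmf_target_json, one per bdevperf instance: each renders a bdev_nvme_attach_controller object from the same heredoc template, joins the objects with IFS=",", and pretty-prints the result through jq. bdevperf then reads it over bash process substitution, which is where the /dev/fd/63 in every command line comes from. A reduced sketch of that mechanic; the outer "subsystems" envelope is paraphrased here since it never appears in this log:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach object per subsystem, filled from the environment the
        # harness set up earlier (TEST_TRANSPORT=tcp, NVMF_PORT=4420, ...).
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the objects on "," and validate/pretty-print via jq; the wrapper
    # object shown in the printf format is an assumption.
    local IFS=","
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}

# Consumed as, e.g.:
#   "$rootdir/build/examples/bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
#       -q 128 -o 4096 -w write -t 1 -s 256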
00:14:10.736 [2024-07-15 21:52:04.901637] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:14:10.736 EAL: No free 2048 kB hugepages reported on node 1
00:14:10.995 EAL: No free 2048 kB hugepages reported on node 1
00:14:10.995 [2024-07-15 21:52:05.076586] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:10.995 EAL: No free 2048 kB hugepages reported on node 1
00:14:10.995 [2024-07-15 21:52:05.154483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:14:10.995 [2024-07-15 21:52:05.177854] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:10.995 EAL: No free 2048 kB hugepages reported on node 1
00:14:10.995 [2024-07-15 21:52:05.218634] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:11.254 [2024-07-15 21:52:05.257124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:14:11.254 [2024-07-15 21:52:05.287187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:14:11.254 [2024-07-15 21:52:05.330453] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:11.254 [2024-07-15 21:52:05.425410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:14:11.514 Running I/O for 1 seconds...
00:14:11.514 Running I/O for 1 seconds...
00:14:11.514 Running I/O for 1 seconds...
00:14:11.514 Running I/O for 1 seconds...
00:14:12.475
00:14:12.475 Latency(us)
00:14:12.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:12.475 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:14:12.475 Nvme1n1 : 1.00 244897.80 956.63 0.00 0.00 520.23 211.03 694.54
00:14:12.475 ===================================================================================================================
00:14:12.475 Total : 244897.80 956.63 0.00 0.00 520.23 211.03 694.54
00:14:12.475
00:14:12.475 Latency(us)
00:14:12.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:12.475 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:14:12.475 Nvme1n1 : 1.01 11701.56 45.71 0.00 0.00 10903.31 5841.25 18236.10
00:14:12.475 ===================================================================================================================
00:14:12.475 Total : 11701.56 45.71 0.00 0.00 10903.31 5841.25 18236.10
00:14:12.475
00:14:12.475 Latency(us)
00:14:12.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:12.475 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:14:12.475 Nvme1n1 : 1.01 11047.20 43.15 0.00 0.00 11547.71 6325.65 25758.50
00:14:12.475 ===================================================================================================================
00:14:12.475 Total : 11047.20 43.15 0.00 0.00 11547.71 6325.65 25758.50
00:14:12.475
00:14:12.475 Latency(us)
00:14:12.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:12.475 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:14:12.475 Nvme1n1 : 1.01 9525.57 37.21 0.00 0.00 13391.91 6667.58 24162.84
00:14:12.475 ===================================================================================================================
00:14:12.475 Total : 9525.57 37.21 0.00 0.00 13391.91 6667.58 24162.84
00:14:12.734 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3654171
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3654174
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3654178
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:12.994 21:52:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:12.994 rmmod nvme_tcp
00:14:12.994 rmmod nvme_fabrics
00:14:12.994 rmmod nvme_keyring
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3654024 ']'
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3654024
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3654024 ']'
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3654024
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3654024
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3654024'
00:14:12.994 killing process with pid 3654024
00:14:12.994 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3654024
00:14:13.253 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3654024
00:14:13.253 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:13.253 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:13.253 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:13.253 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:13.253 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:13.253 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:13.253 21:52:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:15.160 21:52:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:15.160
00:14:15.160 real 0m10.665s
00:14:15.160 user 0m20.143s
00:14:15.160 sys 0m5.531s
00:14:15.161 21:52:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:15.161 21:52:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:15.161 ************************************
00:14:15.161 END TEST nvmf_bdev_io_wait
00:14:15.161 ************************************
00:14:15.161 21:52:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:14:15.161 21:52:09 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:14:15.161 21:52:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:14:15.161 21:52:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:15.161 21:52:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:15.161 ************************************
00:14:15.161 START TEST nvmf_queue_depth
00:14:15.161 ************************************
00:14:15.161 21:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:14:15.421 * Looking for test storage...
00:14:15.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable
00:14:15.421 21:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=()
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=()
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=()
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=()
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=()
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=()
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=()
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:20.725 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:14:20.726 Found 0000:86:00.0 (0x8086 - 0x159b)
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:14:20.726 Found 0000:86:00.1 (0x8086 - 0x159b)
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:14:20.726 Found net devices under 0000:86:00.0: cvl_0_0
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:14:20.726 Found net devices under 0000:86:00.1: cvl_0_1
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:20.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:20.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms
00:14:20.726
00:14:20.726 --- 10.0.0.2 ping statistics ---
00:14:20.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:20.726 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:20.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:20.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms
00:14:20.726
00:14:20.726 --- 10.0.0.1 ping statistics ---
00:14:20.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:20.726 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3658055
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3658055
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3658055 ']'
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:20.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:20.726 21:52:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:20.726 [2024-07-15 21:52:14.827061] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:14:20.726 [2024-07-15 21:52:14.827104] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:20.726 EAL: No free 2048 kB hugepages reported on node 1
00:14:20.726 [2024-07-15 21:52:14.883378] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:20.726 [2024-07-15 21:52:14.961916] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:20.726 [2024-07-15 21:52:14.961951] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:20.726 [2024-07-15 21:52:14.961958] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:20.726 [2024-07-15 21:52:14.961964] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:20.726 [2024-07-15 21:52:14.961969] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:20.726 [2024-07-15 21:52:14.961986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:14:21.663 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:21.663 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:14:21.663 21:52:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:21.663 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:21.663 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:21.663 21:52:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:21.663 21:52:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:21.664 [2024-07-15 21:52:15.667616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:21.664 Malloc0
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:21.664 [2024-07-15 21:52:15.725419] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3658090
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3658090 /var/tmp/bdevperf.sock
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3658090 ']'
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:21.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:21.664 21:52:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:21.664 [2024-07-15 21:52:15.774663] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
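Where bdev_io_wait handed each bdevperf a complete JSON config up front, queue_depth drives bdevperf interactively: -z parks the app waiting on its own RPC socket (-r /var/tmp/bdevperf.sock), the NVMe bdev is attached through that socket, and bdevperf.py then triggers the run. Condensed from the commands visible in this log, with $rootdir again standing in for the SPDK checkout:

# Start bdevperf idle; -q 1024 is the point of the test (queue depth 1024
# against the malloc-backed namespace exported above).
"$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# Attach the target's subsystem as NVMe0 over TCP, talking to bdevperf's
# private socket rather than the target's /var/tmp/spdk.sock.
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the 10-second verify workload against the attached NVMe0n1.
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests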
00:14:21.664 [2024-07-15 21:52:15.774707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658090 ] 00:14:21.664 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.664 [2024-07-15 21:52:15.829523] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.923 [2024-07-15 21:52:15.910786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.492 21:52:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.492 21:52:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:22.492 21:52:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:22.492 21:52:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.492 21:52:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:22.492 NVMe0n1 00:14:22.492 21:52:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.492 21:52:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:22.492 Running I/O for 10 seconds... 00:14:34.704 00:14:34.704 Latency(us) 00:14:34.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.704 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:34.704 Verification LBA range: start 0x0 length 0x4000 00:14:34.704 NVMe0n1 : 10.06 12310.94 48.09 0.00 0.00 82915.51 19831.76 57443.73 00:14:34.704 =================================================================================================================== 00:14:34.704 Total : 12310.94 48.09 0.00 0.00 82915.51 19831.76 57443.73 00:14:34.704 0 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3658090 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3658090 ']' 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3658090 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3658090 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3658090' 00:14:34.704 killing process with pid 3658090 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3658090 00:14:34.704 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.704 00:14:34.704 Latency(us) 00:14:34.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.704 
=================================================================================================================== 00:14:34.704 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.704 21:52:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3658090 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.704 rmmod nvme_tcp 00:14:34.704 rmmod nvme_fabrics 00:14:34.704 rmmod nvme_keyring 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3658055 ']' 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3658055 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3658055 ']' 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3658055 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3658055 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3658055' 00:14:34.704 killing process with pid 3658055 00:14:34.704 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3658055 00:14:34.705 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3658055 00:14:34.705 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:34.705 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:34.705 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:34.705 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.705 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.705 21:52:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.705 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.705 21:52:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.273 21:52:29 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.273 00:14:35.273 real 0m20.035s 00:14:35.273 user 0m24.796s 00:14:35.273 sys 0m5.397s 00:14:35.273 21:52:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.273 21:52:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:35.273 ************************************ 00:14:35.273 END TEST nvmf_queue_depth 00:14:35.273 ************************************ 00:14:35.273 21:52:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:35.273 21:52:29 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:35.273 21:52:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.273 21:52:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.273 21:52:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.273 ************************************ 00:14:35.273 START TEST nvmf_target_multipath 00:14:35.273 ************************************ 00:14:35.273 21:52:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:35.533 * Looking for test storage... 00:14:35.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.533 21:52:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:40.810 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:40.810 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:40.810 Found net devices under 0000:86:00.0: cvl_0_0 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:40.810 Found net devices under 0000:86:00.1: cvl_0_1 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:14:40.810 00:14:40.810 --- 10.0.0.2 ping statistics --- 00:14:40.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.810 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:14:40.810 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:14:40.810 00:14:40.811 --- 10.0.0.1 ping statistics --- 00:14:40.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.811 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:40.811 only one NIC for nvmf test 00:14:40.811 21:52:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.811 rmmod nvme_tcp 00:14:40.811 rmmod nvme_fabrics 00:14:40.811 rmmod nvme_keyring 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:40.811 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.070 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.070 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.070 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.070 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.070 21:52:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.070 21:52:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.070 21:52:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:42.976 00:14:42.976 real 0m7.631s 00:14:42.976 user 0m1.539s 00:14:42.976 sys 0m4.086s 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:42.976 21:52:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:42.976 ************************************ 00:14:42.976 END TEST nvmf_target_multipath 00:14:42.976 ************************************ 00:14:42.976 21:52:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:42.976 21:52:37 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:42.976 21:52:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:42.976 21:52:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.976 21:52:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:42.976 ************************************ 00:14:42.976 START TEST nvmf_zcopy 00:14:42.976 ************************************ 00:14:42.976 21:52:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:43.236 * Looking for test storage... 
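At this transition, two things are worth noting. First, the multipath test above exited without running any I/O: nvmf_tcp_init found only one usable NIC pair, NVMF_SECOND_TARGET_IP stayed empty (nvmf/common.sh@240), and multipath.sh@45-48 printed 'only one NIC for nvmf test' and exited 0. Second, the zcopy test starting here repeats the same nvmf_tcp_init bring-up, which scrolls by again below; condensed, with the interface names as discovered for the two 0x159b ports, it amounts to roughly:

  # start from clean addresses on both E810 ports
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # the target port moves into its own namespace; the initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # reachability is checked in both directions before any NVMe/TCP traffic
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1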
00:14:43.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.236 21:52:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:48.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.567 
21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:48.567 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:48.567 Found net devices under 0000:86:00.0: cvl_0_0 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:48.567 Found net devices under 0000:86:00.1: cvl_0_1 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:48.567 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:48.568 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.568 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.568 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.568 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.568 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:48.568 21:52:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:48.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:14:48.568 00:14:48.568 --- 10.0.0.2 ping statistics --- 00:14:48.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.568 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:14:48.568 00:14:48.568 --- 10.0.0.1 ping statistics --- 00:14:48.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.568 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3666712 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3666712 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3666712 ']' 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.568 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.568 [2024-07-15 21:52:42.153791] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:14:48.568 [2024-07-15 21:52:42.153836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.568 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.568 [2024-07-15 21:52:42.211246] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.568 [2024-07-15 21:52:42.290748] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.568 [2024-07-15 21:52:42.290782] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
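One detail worth pulling out of the nvmfappstart lines above: the target binary itself runs inside the namespace, because nvmf/common.sh@270 prefixes NVMF_APP with NVMF_TARGET_NS_CMD, which is why the listener can later bind 10.0.0.2 on cvl_0_0. A rough by-hand equivalent, using the waitforlisten helper from autotest_common.sh seen in the trace:

  # same invocation as recorded above, relative to the SPDK build directory
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # block until the app's RPC socket accepts connections before issuing rpc.py calls
  waitforlisten "$nvmfpid"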
00:14:48.568 [2024-07-15 21:52:42.290789] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.568 [2024-07-15 21:52:42.290795] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.568 [2024-07-15 21:52:42.290801] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.568 [2024-07-15 21:52:42.290818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.828 [2024-07-15 21:52:42.986656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.828 21:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.828 [2024-07-15 21:52:43.002767] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:48.828 malloc0 00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.828 
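zcopy.sh@22-29 above, together with the namespace attach at @30 just below, provision the zero-copy target; condensed (rpc.py path assumed, flags verbatim from the trace, -m 10 capping the subsystem at ten namespaces):

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0       # 32 MiB bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Unlike the queue-depth test, bdevperf is not attached to the target over its RPC socket afterwards: it is launched with --json /dev/fd/62, and gen_nvmf_target_json renders the single bdev_nvme_attach_controller stanza visible below, so the Nvme1 bdev exists as soon as the app starts.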
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:48.828 {
00:14:48.828   "params": {
00:14:48.828     "name": "Nvme$subsystem",
00:14:48.828     "trtype": "$TEST_TRANSPORT",
00:14:48.828     "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:48.828     "adrfam": "ipv4",
00:14:48.828     "trsvcid": "$NVMF_PORT",
00:14:48.828     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:48.828     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:48.828     "hdgst": ${hdgst:-false},
00:14:48.828     "ddgst": ${ddgst:-false}
00:14:48.828   },
00:14:48.828   "method": "bdev_nvme_attach_controller"
00:14:48.828 }
00:14:48.828 EOF
00:14:48.828 )")
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:14:48.828 21:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:48.828   "params": {
00:14:48.828     "name": "Nvme1",
00:14:48.828     "trtype": "tcp",
00:14:48.828     "traddr": "10.0.0.2",
00:14:48.828     "adrfam": "ipv4",
00:14:48.828     "trsvcid": "4420",
00:14:48.828     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:48.828     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:48.828     "hdgst": false,
00:14:48.828     "ddgst": false
00:14:48.828   },
00:14:48.828   "method": "bdev_nvme_attach_controller"
00:14:48.828 }'
00:14:49.087 [2024-07-15 21:52:43.080418] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:14:49.087 [2024-07-15 21:52:43.080462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666960 ]
00:14:49.087 EAL: No free 2048 kB hugepages reported on node 1
00:14:49.087 [2024-07-15 21:52:43.133209] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:49.087 [2024-07-15 21:52:43.206476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:49.346 Running I/O for 10 seconds...
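The /dev/fd/62 argument above is bash process substitution, i.e. bdevperf --json <(gen_nvmf_target_json), so bdevperf reads the generated config as an ordinary file. The trace prints only the attach-controller fragment; a sketch of the complete document the helper is expected to hand to bdevperf, assuming the conventional wrapping of that fragment in a bdev-subsystem config array:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }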
00:14:59.328 
00:14:59.328                                                            Latency(us)
00:14:59.328 Device Information                   : runtime(s)     IOPS    MiB/s   Fail/s    TO/s   Average       min       max
00:14:59.328 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:14:59.328 Verification LBA range: start 0x0 length 0x1000
00:14:59.328    Nvme1n1                           :      10.01  8642.44    67.52     0.00    0.00  14768.25   2692.67  27126.21
00:14:59.328 ===================================================================================================================
00:14:59.328 Total                                :             8642.44    67.52     0.00    0.00  14768.25   2692.67  27126.21
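The MiB/s column is consistent with the IOPS column at the job's 8192-byte I/O size; a quick arithmetic check (bc assumed available on the build host):

    # 8642.44 IO/s x 8192 B per IO = 70798868.48 B/s; divide by 2^20 B per MiB:
    echo '8642.44 * 8192 / 1048576' | bc -l    # -> 67.519..., matching the reported 67.52 MiB/s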
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3668688
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF ... EOF
00:14:59.588 )")  [the heredoc body is identical to the first run's bdev_nvme_attach_controller template above; duplicate elided]
00:14:59.588 [2024-07-15 21:52:53.664500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:59.588 [2024-07-15 21:52:53.664533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:59.588 [... this two-line *ERROR* pair recurs roughly every 8 ms (21:52:53.672 through .824) while bdevperf starts; the repetitions are elided and only the interleaved startup entries are kept below ...]
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:14:59.588 21:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ ... }'  [same Nvme1 connection JSON as printed for the first run; duplicate elided]
00:14:59.588 [2024-07-15 21:52:53.701808] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:14:59.588 [2024-07-15 21:52:53.701850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668688 ]
00:14:59.588 EAL: No free 2048 kB hugepages reported on node 1
00:14:59.588 [2024-07-15 21:52:53.756517] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:59.848 [2024-07-15 21:52:53.832692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:59.848 [... the *ERROR* pair keeps recurring (21:52:53.832 through .989); elided ...]
00:14:59.848 Running I/O for 5 seconds...
00:14:59.848 [... the pair continues during the run, 21:52:53.997 through 21:52:54.107; elided, and the repetition carries on below ...]
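The flood of paired errors here is expected behavior rather than a failure: while the 5-second randrw job runs, the test keeps re-adding the namespace that is already attached as NSID 1, and every rejected attempt drives a subsystem pause/resume cycle under zero-copy I/O. A plausible sketch of the driving loop, reusing the rpc_cmd helper and the perfpid recorded above (the exact loop body in target/zcopy.sh may differ):

    # Hammer nvmf_subsystem_add_ns while bdevperf (PID $perfpid) is alive.
    # Each call fails with "Requested NSID 1 already in use" but still pauses
    # and resumes the subsystem, which is the behavior under test here.
    while kill -0 $perfpid 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done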
00:15:00.108 [... the add_ns/ns_paused *ERROR* pair goes on recurring roughly every 8-10 ms for the remainder of the capture, from [2024-07-15 21:52:54.116204] (runtime stamp 00:15:00.108) through [2024-07-15 21:52:55.779735] (runtime stamp 00:15:01.668), where the section is cut off mid-entry; all repetitions elided ...]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.668 [2024-07-15 21:52:55.788904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.668 [2024-07-15 21:52:55.788923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.668 [2024-07-15 21:52:55.797440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.668 [2024-07-15 21:52:55.797458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.668 [2024-07-15 21:52:55.806136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.668 [2024-07-15 21:52:55.806154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.668 [2024-07-15 21:52:55.814623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.668 [2024-07-15 21:52:55.814642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.668 [2024-07-15 21:52:55.823126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.668 [2024-07-15 21:52:55.823145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.668 [2024-07-15 21:52:55.832187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.669 [2024-07-15 21:52:55.832205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.669 [2024-07-15 21:52:55.838891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.669 [2024-07-15 21:52:55.838909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.669 [2024-07-15 21:52:55.849928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.669 [2024-07-15 21:52:55.849947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.669 [2024-07-15 21:52:55.858807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.669 [2024-07-15 21:52:55.858826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.669 [2024-07-15 21:52:55.867107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.669 [2024-07-15 21:52:55.867126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.669 [2024-07-15 21:52:55.875644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.669 [2024-07-15 21:52:55.875662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.669 [2024-07-15 21:52:55.884271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.669 [2024-07-15 21:52:55.884290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.669 [2024-07-15 21:52:55.892640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.669 [2024-07-15 21:52:55.892659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.669 [2024-07-15 21:52:55.901617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.669 [2024-07-15 21:52:55.901636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.927 [2024-07-15 21:52:55.910355] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.927 [2024-07-15 21:52:55.910376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.927 [2024-07-15 21:52:55.917328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.927 [2024-07-15 21:52:55.917346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.927 [2024-07-15 21:52:55.928445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:55.928467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:55.937366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:55.937385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:55.945902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:55.945920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:55.952675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:55.952693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:55.963932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:55.963950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:55.970817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:55.970835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:55.980913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:55.980932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:55.989147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:55.989165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:55.997778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:55.997796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.006427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.006447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.015643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.015661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.024188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.024206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.033207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.033231] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.039895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.039913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.051000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.051020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.058164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.058181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.068018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.068038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.076636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.076656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.085456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.085474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.093927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.093949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.102411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.102430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.111543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.111562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.120693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.120712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.129967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.129985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.139156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.139174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.148292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.148311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.156881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.156900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.928 [2024-07-15 21:52:56.166275] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.928 [2024-07-15 21:52:56.166293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.174909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.174928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.183516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.183535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.192052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.192069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.200879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.200898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.209660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.209679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.218163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.218181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.227537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.227555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.236477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.236495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.245430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.245449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.254036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.254055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.262928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.262952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.271582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.271601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.280766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.280786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.289869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.289890] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.298589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.298609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.307741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.307760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.186 [2024-07-15 21:52:56.316745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.186 [2024-07-15 21:52:56.316765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.325762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.325781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.335121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.335140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.343642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.343661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.352157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.352187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.361308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.361328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.370518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.370538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.378994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.379013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.388297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.388316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.397306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.397325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.406183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.406202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.414657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.414675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.187 [2024-07-15 21:52:56.423482] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.187 [2024-07-15 21:52:56.423501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.445 [2024-07-15 21:52:56.432545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.445 [2024-07-15 21:52:56.432565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.445 [2024-07-15 21:52:56.441719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.445 [2024-07-15 21:52:56.441737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.445 [2024-07-15 21:52:56.449986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.450005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.458699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.458719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.467503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.467522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.476492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.476511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.483616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.483635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.493858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.493877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.503175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.503194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.510372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.510391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.520774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.520793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.529491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.529510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.538886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.538905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.547470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.547490] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.556050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.556069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.564646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.564667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.573325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.573344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.581839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.581858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.590365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.590384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.599757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.599776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.609063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.609082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.618042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.618060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.627321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.627339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.636507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.636526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.645590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.645608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.654139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.654157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.663440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.663459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.673266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.673284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.446 [2024-07-15 21:52:56.681793] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.446 [2024-07-15 21:52:56.681811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.688802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.688822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.699880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.699899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.708493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.708511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.718286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.718305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.726961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.726979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.736051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.736070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.745311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.745330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.754559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.754577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.763266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.763285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.771766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.771784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.780205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.780223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.788766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.788785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.797198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.797216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.805658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.805677] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.812393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.812411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.822509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.822528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.831186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.831205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.840416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.840434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.849673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.849692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.856421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.856440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.867290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.867309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.914626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.914645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.705 [2024-07-15 21:52:56.924793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.705 [2024-07-15 21:52:56.924811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.706 [2024-07-15 21:52:56.933200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.706 [2024-07-15 21:52:56.933218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.706 [2024-07-15 21:52:56.941947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.706 [2024-07-15 21:52:56.941966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.964 [2024-07-15 21:52:56.950430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.964 [2024-07-15 21:52:56.950450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.964 [2024-07-15 21:52:56.958940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.964 [2024-07-15 21:52:56.958959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.964 [2024-07-15 21:52:56.968416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.964 [2024-07-15 21:52:56.968439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.964 [2024-07-15 21:52:56.977612] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.964 [2024-07-15 21:52:56.977631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.964 [2024-07-15 21:52:56.986075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.964 [2024-07-15 21:52:56.986094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.964 [2024-07-15 21:52:56.995160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.964 [2024-07-15 21:52:56.995179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.964 [2024-07-15 21:52:57.003975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.964 [2024-07-15 21:52:57.003994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.964 [2024-07-15 21:52:57.012481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.964 [2024-07-15 21:52:57.012500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.964 [2024-07-15 21:52:57.020873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.020891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.029352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.029370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.037843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.037862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.046368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.046387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.054843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.054862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.063944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.063963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.070690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.070708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.081919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.081938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.090396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.090414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.098920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.098939] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.106180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.106198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.116660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.116679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.125516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.125534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.133901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.133924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.143271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.143290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.152499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.152517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.161329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.161347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.169503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.169521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.178440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.178458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.187674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.187694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.196134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.196152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.965 [2024-07-15 21:52:57.204561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.965 [2024-07-15 21:52:57.204580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.213444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.213464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.221954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.221973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.230495] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.230513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.239165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.239183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.247699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.247717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.256373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.256391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.263091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.263109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.274343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.274361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.282895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.282914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.291904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.291922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.300379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.300401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.308918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.308937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.317743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.317762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.326748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.326767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.335917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.335936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.344866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.344885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.354172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.354190] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.362817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.362836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.372288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.372307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.380958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.380979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.389474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.389494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.397774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.397792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.406410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.406428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.415388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.415407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.424327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.424345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.432860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.432878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.441755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.441774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.451118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.451136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.224 [2024-07-15 21:52:57.459930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.224 [2024-07-15 21:52:57.459948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.483 [2024-07-15 21:52:57.468782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.483 [2024-07-15 21:52:57.468804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.483 [2024-07-15 21:52:57.477751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.483 [2024-07-15 21:52:57.477769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.483 [2024-07-15 21:52:57.485914] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:03.483 [2024-07-15 21:52:57.485932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line pair (subsystem.c:2058 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553 "Unable to add namespace") repeats continuously from 21:52:57.494 through 21:52:59.004 while the test loops the add-namespace RPC; several hundred near-identical repetitions elided ...]
00:15:04.777
00:15:04.777 Latency(us)
00:15:04.777 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:15:04.777 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:04.777 Nvme1n1                     :       5.01   16411.43     128.21      0.00     0.00    7793.49    2507.46   48325.68
00:15:04.777 ===================================================================================================================
00:15:04.777 Total                       :              16411.43     128.21      0.00     0.00    7793.49    2507.46   48325.68
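The flood of paired errors above is the point of this stage of the test: zcopy.sh hot-adds and hot-removes namespace 1 while the verify job keeps I/O in flight, and every add that lands while NSID 1 is still attached fails. The loop itself is not reproduced in this trace; a minimal sketch of what such a driver could look like, assuming the standard scripts/rpc.py entry point and the subsystem and bdev names used elsewhere in this run, is:

  #!/usr/bin/env bash
  # Hypothetical reconstruction of the background add/remove loop; the real one
  # sits near line 42 of test/nvmf/target/zcopy.sh and is not shown in this log.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK_DIR/scripts/rpc.py"

  # Re-attach namespace 1 in a tight loop while I/O is in flight; any add that
  # races an attach still in progress fails with "Requested NSID 1 already in
  # use", which nvmf_rpc_ns_paused then reports as "Unable to add namespace".
  while true; do
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 || true
  done &
  loop_pid=$!

  # ... foreground verify job runs here ...

  kill "$loop_pid" 2>/dev/null   # "kill: ... No such process" if it already exited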
[... the error pair resumes at 21:52:59.013 and repeats through 21:52:59.195; repetitions elided ...]
00:15:05.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3668688) - No such process
00:15:05.036 21:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3668688
00:15:05.036 21:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:05.036 21:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:05.036 21:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:05.036 21:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:05.036 21:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:05.036 21:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:05.036 21:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:05.036 delay0
00:15:05.037 21:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:05.037 21:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:15:05.037 21:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:05.037 21:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:05.037 21:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:05.037 21:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:15:05.037 EAL: No free 2048 kB hugepages reported on node 1
00:15:05.037 [2024-07-15 21:52:59.263610] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:15:11.606 Initializing NVMe Controllers
00:15:11.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:11.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:11.606 Initialization complete. Launching workers.
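Pulled out of the xtrace noise, the traced sequence above reduces to the following shell steps. The commands are exactly those shown in the log; the direct rpc.py invocation stands in for the test suite's rpc_cmd wrapper and is an assumption:

  #!/usr/bin/env bash
  # Replay of the traced commands: swap the plain namespace for a delay bdev so
  # queued I/O lives long enough to be aborted, then run the abort example.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK_DIR/scripts/rpc.py"   # assumed path; the trace uses rpc_cmd

  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # 1,000,000 us = 1 s of added latency: -r/-t average and p99 read latency,
  # -w/-n average and p99 write latency.
  "$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # 5 s of 50/50 random I/O at queue depth 64, aborting commands in flight.
  "$SPDK_DIR/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'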
00:15:11.606 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 99 00:15:11.606 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 386, failed to submit 33 00:15:11.606 success 175, unsuccess 211, failed 0 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:11.606 rmmod nvme_tcp 00:15:11.606 rmmod nvme_fabrics 00:15:11.606 rmmod nvme_keyring 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3666712 ']' 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3666712 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3666712 ']' 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3666712 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3666712 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3666712' 00:15:11.606 killing process with pid 3666712 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3666712 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3666712 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.606 21:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.567 21:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:13.567 00:15:13.567 real 0m30.491s 00:15:13.567 user 0m41.776s 00:15:13.567 sys 0m9.612s 00:15:13.567 21:53:07 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.567 21:53:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:13.567 ************************************ 00:15:13.567 END TEST nvmf_zcopy 00:15:13.567 ************************************ 00:15:13.567 21:53:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:13.567 21:53:07 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:13.567 21:53:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.567 21:53:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.567 21:53:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:13.567 ************************************ 00:15:13.567 START TEST nvmf_nmic 00:15:13.567 ************************************ 00:15:13.567 21:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:13.827 * Looking for test storage... 00:15:13.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain bin directories repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same PATH with the go bin directory promoted to the front; elided ...]
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same PATH with the protoc bin directory promoted to the front; elided ...]
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the exported PATH echoed back; elided ...]
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:13.827 21:53:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:19.099 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:19.099 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:19.099 Found net devices under 0000:86:00.0: cvl_0_0 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:19.099 Found net devices under 0000:86:00.1: cvl_0_1 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:19.099 21:53:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:19.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:15:19.099 00:15:19.099 --- 10.0.0.2 ping statistics --- 00:15:19.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.099 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:19.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:15:19.099 00:15:19.099 --- 10.0.0.1 ping statistics --- 00:15:19.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.099 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3674014 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3674014 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3674014 ']' 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.099 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.100 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.100 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:19.100 [2024-07-15 21:53:13.161407] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:15:19.100 [2024-07-15 21:53:13.161450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.100 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.100 [2024-07-15 21:53:13.222504] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.100 [2024-07-15 21:53:13.304778] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.100 [2024-07-15 21:53:13.304830] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:19.100 [2024-07-15 21:53:13.304838] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.100 [2024-07-15 21:53:13.304845] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.100 [2024-07-15 21:53:13.304850] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.100 [2024-07-15 21:53:13.304910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.100 [2024-07-15 21:53:13.305007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.100 [2024-07-15 21:53:13.305018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.100 [2024-07-15 21:53:13.305020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.034 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.034 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:20.034 21:53:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.034 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.034 21:53:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 [2024-07-15 21:53:14.013214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 Malloc0 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 [2024-07-15 21:53:14.065183] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:20.034 test case1: single bdev can't be used in multiple subsystems 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 [2024-07-15 21:53:14.089085] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:20.034 [2024-07-15 21:53:14.089106] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:20.034 [2024-07-15 21:53:14.089113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.034 request: 00:15:20.034 { 00:15:20.034 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:20.034 "namespace": { 00:15:20.034 "bdev_name": "Malloc0", 00:15:20.034 "no_auto_visible": false 00:15:20.034 }, 00:15:20.034 "method": "nvmf_subsystem_add_ns", 00:15:20.034 "req_id": 1 00:15:20.034 } 00:15:20.034 Got JSON-RPC error response 00:15:20.034 response: 00:15:20.034 { 00:15:20.034 "code": -32602, 00:15:20.034 "message": "Invalid parameters" 00:15:20.034 } 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:20.034 Adding namespace failed - expected result. 
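The rejection above is the point of test case1: Malloc0 is already claimed (type exclusive_write) by nqn.2016-06.io.spdk:cnode1, so the second nvmf_subsystem_add_ns against cnode2 must fail. A minimal out-of-band sketch of the same check, assuming a running nvmf_tgt reachable over /var/tmp/spdk.sock and reusing only the rpc.py calls, NQNs, and serials that appear in this trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim succeeds
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        || echo 'Adding namespace failed - expected result.'         # second claim rejected

The second add_ns call is expected to come back with the JSON-RPC "Invalid parameters" error shown above, since the bdev layer refuses to open Malloc0 a second time for exclusive write.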
00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:20.034 test case2: host connect to nvmf target in multiple paths 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 [2024-07-15 21:53:14.101201] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.034 21:53:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:20.970 21:53:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:22.347 21:53:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:22.347 21:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:22.347 21:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.347 21:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:22.347 21:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:24.248 21:53:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:24.248 21:53:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:24.248 21:53:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.248 21:53:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:24.248 21:53:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.249 21:53:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:24.249 21:53:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:24.249 [global] 00:15:24.249 thread=1 00:15:24.249 invalidate=1 00:15:24.249 rw=write 00:15:24.249 time_based=1 00:15:24.249 runtime=1 00:15:24.249 ioengine=libaio 00:15:24.249 direct=1 00:15:24.249 bs=4096 00:15:24.249 iodepth=1 00:15:24.249 norandommap=0 00:15:24.249 numjobs=1 00:15:24.249 00:15:24.249 verify_dump=1 00:15:24.249 verify_backlog=512 00:15:24.249 verify_state_save=0 00:15:24.249 do_verify=1 00:15:24.249 verify=crc32c-intel 00:15:24.249 [job0] 00:15:24.249 filename=/dev/nvme0n1 00:15:24.249 Could not set queue depth (nvme0n1) 00:15:24.506 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:24.506 fio-3.35 00:15:24.506 Starting 1 thread 00:15:25.882 00:15:25.882 job0: (groupid=0, jobs=1): err= 0: pid=3675017: Mon Jul 15 21:53:19 2024 00:15:25.882 read: IOPS=101, BW=407KiB/s (417kB/s)(424KiB/1041msec) 00:15:25.882 slat (nsec): min=6597, max=23788, avg=11075.62, stdev=6112.83 00:15:25.882 
clat (usec): min=308, max=42043, avg=8586.17, stdev=16539.91 00:15:25.882 lat (usec): min=316, max=42066, avg=8597.25, stdev=16545.64 00:15:25.882 clat percentiles (usec): 00:15:25.882 | 1.00th=[ 310], 5.00th=[ 338], 10.00th=[ 351], 20.00th=[ 359], 00:15:25.882 | 30.00th=[ 367], 40.00th=[ 388], 50.00th=[ 420], 60.00th=[ 429], 00:15:25.882 | 70.00th=[ 453], 80.00th=[ 889], 90.00th=[42206], 95.00th=[42206], 00:15:25.882 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:25.882 | 99.99th=[42206] 00:15:25.882 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:15:25.882 slat (usec): min=9, max=25245, avg=60.02, stdev=1115.24 00:15:25.882 clat (usec): min=164, max=384, avg=190.42, stdev=29.17 00:15:25.882 lat (usec): min=175, max=25576, avg=250.44, stdev=1121.85 00:15:25.882 clat percentiles (usec): 00:15:25.882 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 176], 00:15:25.882 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:15:25.882 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 243], 95.00th=[ 265], 00:15:25.882 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 383], 99.95th=[ 383], 00:15:25.882 | 99.99th=[ 383] 00:15:25.882 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:25.882 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:25.882 lat (usec) : 250=77.51%, 500=18.45%, 750=0.49%, 1000=0.16% 00:15:25.882 lat (msec) : 50=3.40% 00:15:25.882 cpu : usr=0.58%, sys=0.29%, ctx=621, majf=0, minf=2 00:15:25.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:25.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.882 issued rwts: total=106,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:25.882 00:15:25.882 Run status group 0 (all jobs): 00:15:25.882 READ: bw=407KiB/s (417kB/s), 407KiB/s-407KiB/s (417kB/s-417kB/s), io=424KiB (434kB), run=1041-1041msec 00:15:25.882 WRITE: bw=1967KiB/s (2015kB/s), 1967KiB/s-1967KiB/s (2015kB/s-2015kB/s), io=2048KiB (2097kB), run=1041-1041msec 00:15:25.882 00:15:25.882 Disk stats (read/write): 00:15:25.882 nvme0n1: ios=154/512, merge=0/0, ticks=1036/92, in_queue=1128, util=98.60% 00:15:25.882 21:53:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:25.882 21:53:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.882 21:53:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:25.882 21:53:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:25.882 21:53:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.882 21:53:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:25.882 21:53:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.882 rmmod nvme_tcp 00:15:25.882 rmmod nvme_fabrics 00:15:25.882 rmmod nvme_keyring 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3674014 ']' 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3674014 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3674014 ']' 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3674014 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.882 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3674014 00:15:25.883 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:25.883 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:25.883 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3674014' 00:15:25.883 killing process with pid 3674014 00:15:25.883 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3674014 00:15:25.883 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3674014 00:15:26.142 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:26.142 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:26.142 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:26.142 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.142 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.142 21:53:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.142 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.142 21:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.679 21:53:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:28.679 00:15:28.679 real 0m14.622s 00:15:28.679 user 0m34.760s 00:15:28.679 sys 0m4.620s 00:15:28.679 21:53:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.679 21:53:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:28.679 ************************************ 00:15:28.679 END TEST nvmf_nmic 00:15:28.679 ************************************ 00:15:28.679 21:53:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.679 21:53:22 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:28.679 21:53:22 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.679 21:53:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.679 21:53:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.679 ************************************ 00:15:28.679 START TEST nvmf_fio_target 00:15:28.679 ************************************ 00:15:28.679 21:53:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:28.679 * Looking for test storage... 00:15:28.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.679 21:53:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.679 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:28.679 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.679 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:28.680 21:53:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.874 21:53:27 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:32.874 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:32.874 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.874 21:53:27 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:32.874 Found net devices under 0000:86:00.0: cvl_0_0 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:32.874 Found net devices under 0000:86:00.1: cvl_0_1 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.874 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:15:33.134 00:15:33.134 --- 10.0.0.2 ping statistics --- 00:15:33.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.134 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:15:33.134 00:15:33.134 --- 10.0.0.1 ping statistics --- 00:15:33.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.134 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3678716 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3678716 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3678716 ']' 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.134 21:53:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.394 [2024-07-15 21:53:27.402752] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:15:33.394 [2024-07-15 21:53:27.402794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.394 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.394 [2024-07-15 21:53:27.459680] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.394 [2024-07-15 21:53:27.540529] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.394 [2024-07-15 21:53:27.540564] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.394 [2024-07-15 21:53:27.540571] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.394 [2024-07-15 21:53:27.540578] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.394 [2024-07-15 21:53:27.540583] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.394 [2024-07-15 21:53:27.540617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.394 [2024-07-15 21:53:27.540714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.394 [2024-07-15 21:53:27.540731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.394 [2024-07-15 21:53:27.540733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.333 21:53:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.333 21:53:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:34.333 21:53:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.333 21:53:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.333 21:53:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.333 21:53:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.333 21:53:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:34.333 [2024-07-15 21:53:28.402631] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.333 21:53:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:34.644 21:53:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:34.644 21:53:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:34.644 21:53:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:34.644 21:53:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:34.915 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:34.915 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.174 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:35.174 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:35.174 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.433 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:35.433 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.692 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:35.692 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.951 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:35.951 21:53:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:35.951 21:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:36.210 21:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:36.210 21:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.470 21:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:36.470 21:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.729 21:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.729 [2024-07-15 21:53:30.877027] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.729 21:53:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:36.988 21:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:37.247 21:53:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:38.185 21:53:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:15:38.185 21:53:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:38.185 21:53:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.185 21:53:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:38.185 21:53:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:38.185 21:53:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:40.723 21:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:40.723 21:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:40.723 21:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.723 21:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:40.723 21:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.723 21:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:40.723 21:53:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:40.723 [global] 00:15:40.723 thread=1 00:15:40.723 invalidate=1 00:15:40.723 rw=write 00:15:40.723 time_based=1 00:15:40.723 runtime=1 00:15:40.723 ioengine=libaio 00:15:40.723 direct=1 00:15:40.723 bs=4096 00:15:40.723 iodepth=1 00:15:40.723 norandommap=0 00:15:40.723 numjobs=1 00:15:40.723 00:15:40.723 verify_dump=1 00:15:40.723 verify_backlog=512 00:15:40.723 verify_state_save=0 00:15:40.723 do_verify=1 00:15:40.723 verify=crc32c-intel 00:15:40.723 [job0] 00:15:40.723 filename=/dev/nvme0n1 00:15:40.723 [job1] 00:15:40.723 filename=/dev/nvme0n2 00:15:40.723 [job2] 00:15:40.723 filename=/dev/nvme0n3 00:15:40.723 [job3] 00:15:40.723 filename=/dev/nvme0n4 00:15:40.723 Could not set queue depth (nvme0n1) 00:15:40.723 Could not set queue depth (nvme0n2) 00:15:40.723 Could not set queue depth (nvme0n3) 00:15:40.723 Could not set queue depth (nvme0n4) 00:15:40.723 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:40.723 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:40.723 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:40.723 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:40.723 fio-3.35 00:15:40.723 Starting 4 threads 00:15:42.101 00:15:42.101 job0: (groupid=0, jobs=1): err= 0: pid=3680104: Mon Jul 15 21:53:35 2024 00:15:42.101 read: IOPS=437, BW=1749KiB/s (1791kB/s)(1784KiB/1020msec) 00:15:42.101 slat (nsec): min=6446, max=30218, avg=8051.15, stdev=3322.98 00:15:42.101 clat (usec): min=282, max=41959, avg=1997.00, stdev=8036.25 00:15:42.101 lat (usec): min=290, max=41982, avg=2005.06, stdev=8038.57 00:15:42.101 clat percentiles (usec): 00:15:42.101 | 1.00th=[ 293], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 330], 00:15:42.101 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:15:42.101 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 420], 95.00th=[ 578], 00:15:42.101 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:15:42.101 | 99.99th=[42206] 
00:15:42.101 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:15:42.101 slat (nsec): min=10389, max=51150, avg=14388.64, stdev=2621.02 00:15:42.101 clat (usec): min=159, max=469, avg=224.89, stdev=45.63 00:15:42.101 lat (usec): min=171, max=520, avg=239.28, stdev=46.05 00:15:42.101 clat percentiles (usec): 00:15:42.101 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:15:42.101 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 210], 60.00th=[ 237], 00:15:42.101 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 289], 95.00th=[ 314], 00:15:42.101 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 469], 99.95th=[ 469], 00:15:42.101 | 99.99th=[ 469] 00:15:42.101 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:15:42.101 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:42.101 lat (usec) : 250=39.56%, 500=57.62%, 750=0.94% 00:15:42.101 lat (msec) : 50=1.88% 00:15:42.101 cpu : usr=0.39%, sys=1.28%, ctx=960, majf=0, minf=1 00:15:42.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.101 issued rwts: total=446,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.101 job1: (groupid=0, jobs=1): err= 0: pid=3680106: Mon Jul 15 21:53:35 2024 00:15:42.101 read: IOPS=597, BW=2390KiB/s (2447kB/s)(2440KiB/1021msec) 00:15:42.101 slat (nsec): min=7368, max=25001, avg=8541.42, stdev=2432.83 00:15:42.101 clat (usec): min=234, max=41420, avg=1238.95, stdev=6098.81 00:15:42.101 lat (usec): min=242, max=41428, avg=1247.49, stdev=6099.35 00:15:42.101 clat percentiles (usec): 00:15:42.101 | 1.00th=[ 247], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 273], 00:15:42.101 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:15:42.101 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 396], 95.00th=[ 474], 00:15:42.101 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:15:42.101 | 99.99th=[41681] 00:15:42.101 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:15:42.101 slat (usec): min=10, max=25829, avg=37.39, stdev=806.80 00:15:42.101 clat (usec): min=142, max=517, avg=210.93, stdev=48.25 00:15:42.101 lat (usec): min=154, max=26272, avg=248.32, stdev=815.49 00:15:42.101 clat percentiles (usec): 00:15:42.101 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 172], 20.00th=[ 178], 00:15:42.101 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 204], 00:15:42.101 | 70.00th=[ 221], 80.00th=[ 239], 90.00th=[ 273], 95.00th=[ 314], 00:15:42.101 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 453], 99.95th=[ 519], 00:15:42.101 | 99.99th=[ 519] 00:15:42.101 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=2 00:15:42.101 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:15:42.101 lat (usec) : 250=53.30%, 500=45.17%, 750=0.61% 00:15:42.101 lat (msec) : 2=0.06%, 50=0.86% 00:15:42.101 cpu : usr=1.47%, sys=2.55%, ctx=1636, majf=0, minf=2 00:15:42.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.101 issued rwts: total=610,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.101 
latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.101 job2: (groupid=0, jobs=1): err= 0: pid=3680107: Mon Jul 15 21:53:35 2024 00:15:42.101 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:15:42.101 slat (nsec): min=9933, max=24185, avg=22298.86, stdev=2984.95 00:15:42.101 clat (usec): min=40882, max=41986, avg=41084.20, stdev=314.77 00:15:42.101 lat (usec): min=40905, max=42009, avg=41106.50, stdev=314.02 00:15:42.101 clat percentiles (usec): 00:15:42.101 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:42.101 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:42.101 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:15:42.101 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:42.101 | 99.99th=[42206] 00:15:42.101 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:15:42.101 slat (nsec): min=10809, max=37493, avg=12271.75, stdev=1934.02 00:15:42.101 clat (usec): min=186, max=500, avg=233.07, stdev=27.94 00:15:42.101 lat (usec): min=198, max=538, avg=245.34, stdev=28.55 00:15:42.101 clat percentiles (usec): 00:15:42.101 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:15:42.101 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:15:42.101 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 285], 00:15:42.101 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 502], 99.95th=[ 502], 00:15:42.101 | 99.99th=[ 502] 00:15:42.101 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:15:42.101 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:42.101 lat (usec) : 250=77.53%, 500=18.16%, 750=0.19% 00:15:42.101 lat (msec) : 50=4.12% 00:15:42.101 cpu : usr=0.10%, sys=1.26%, ctx=535, majf=0, minf=1 00:15:42.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.101 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.101 job3: (groupid=0, jobs=1): err= 0: pid=3680108: Mon Jul 15 21:53:35 2024 00:15:42.101 read: IOPS=643, BW=2573KiB/s (2635kB/s)(2640KiB/1026msec) 00:15:42.101 slat (nsec): min=7272, max=38235, avg=8545.80, stdev=2348.28 00:15:42.101 clat (usec): min=265, max=41332, avg=1174.53, stdev=5868.36 00:15:42.101 lat (usec): min=275, max=41341, avg=1183.07, stdev=5869.30 00:15:42.101 clat percentiles (usec): 00:15:42.101 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 289], 00:15:42.101 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:15:42.101 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 392], 95.00th=[ 424], 00:15:42.101 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:42.101 | 99.99th=[41157] 00:15:42.101 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:15:42.101 slat (nsec): min=5494, max=42841, avg=10769.18, stdev=2789.35 00:15:42.101 clat (usec): min=160, max=651, avg=223.26, stdev=46.14 00:15:42.101 lat (usec): min=170, max=691, avg=234.03, stdev=46.18 00:15:42.101 clat percentiles (usec): 00:15:42.101 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:15:42.101 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 221], 00:15:42.101 | 70.00th=[ 241], 80.00th=[ 251], 
90.00th=[ 281], 95.00th=[ 318], 00:15:42.101 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 482], 99.95th=[ 652], 00:15:42.101 | 99.99th=[ 652] 00:15:42.101 bw ( KiB/s): min= 944, max= 7248, per=34.40%, avg=4096.00, stdev=4457.60, samples=2 00:15:42.101 iops : min= 236, max= 1812, avg=1024.00, stdev=1114.40, samples=2 00:15:42.101 lat (usec) : 250=47.86%, 500=51.25%, 750=0.06% 00:15:42.101 lat (msec) : 50=0.83% 00:15:42.101 cpu : usr=1.27%, sys=2.54%, ctx=1684, majf=0, minf=1 00:15:42.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.101 issued rwts: total=660,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.101 00:15:42.101 Run status group 0 (all jobs): 00:15:42.101 READ: bw=6736KiB/s (6898kB/s), 85.3KiB/s-2573KiB/s (87.3kB/s-2635kB/s), io=6952KiB (7119kB), run=1020-1032msec 00:15:42.101 WRITE: bw=11.6MiB/s (12.2MB/s), 1984KiB/s-4012KiB/s (2032kB/s-4108kB/s), io=12.0MiB (12.6MB), run=1020-1032msec 00:15:42.101 00:15:42.101 Disk stats (read/write): 00:15:42.101 nvme0n1: ios=461/512, merge=0/0, ticks=1708/110, in_queue=1818, util=98.20% 00:15:42.101 nvme0n2: ios=619/1024, merge=0/0, ticks=1567/206, in_queue=1773, util=98.68% 00:15:42.101 nvme0n3: ios=45/512, merge=0/0, ticks=1690/111, in_queue=1801, util=98.85% 00:15:42.101 nvme0n4: ios=655/1024, merge=0/0, ticks=567/219, in_queue=786, util=89.82% 00:15:42.101 21:53:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:42.101 [global] 00:15:42.101 thread=1 00:15:42.101 invalidate=1 00:15:42.101 rw=randwrite 00:15:42.101 time_based=1 00:15:42.101 runtime=1 00:15:42.101 ioengine=libaio 00:15:42.101 direct=1 00:15:42.101 bs=4096 00:15:42.101 iodepth=1 00:15:42.101 norandommap=0 00:15:42.101 numjobs=1 00:15:42.101 00:15:42.101 verify_dump=1 00:15:42.101 verify_backlog=512 00:15:42.101 verify_state_save=0 00:15:42.101 do_verify=1 00:15:42.101 verify=crc32c-intel 00:15:42.101 [job0] 00:15:42.101 filename=/dev/nvme0n1 00:15:42.101 [job1] 00:15:42.101 filename=/dev/nvme0n2 00:15:42.101 [job2] 00:15:42.101 filename=/dev/nvme0n3 00:15:42.101 [job3] 00:15:42.101 filename=/dev/nvme0n4 00:15:42.101 Could not set queue depth (nvme0n1) 00:15:42.101 Could not set queue depth (nvme0n2) 00:15:42.101 Could not set queue depth (nvme0n3) 00:15:42.101 Could not set queue depth (nvme0n4) 00:15:42.101 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.101 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.101 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.101 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.101 fio-3.35 00:15:42.101 Starting 4 threads 00:15:43.477 00:15:43.477 job0: (groupid=0, jobs=1): err= 0: pid=3680480: Mon Jul 15 21:53:37 2024 00:15:43.477 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:15:43.477 slat (nsec): min=10129, max=24005, avg=22911.50, stdev=2864.52 00:15:43.477 clat (usec): min=40882, max=42087, avg=41013.91, stdev=243.91 00:15:43.477 lat (usec): min=40906, max=42110, 
avg=41036.82, stdev=244.04 00:15:43.477 clat percentiles (usec): 00:15:43.477 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:43.477 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:43.477 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:43.477 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:43.477 | 99.99th=[42206] 00:15:43.477 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:15:43.477 slat (nsec): min=8607, max=68216, avg=11578.07, stdev=2936.50 00:15:43.477 clat (usec): min=166, max=402, avg=204.30, stdev=21.21 00:15:43.477 lat (usec): min=176, max=470, avg=215.88, stdev=22.40 00:15:43.477 clat percentiles (usec): 00:15:43.477 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:15:43.477 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:15:43.477 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 239], 00:15:43.477 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 404], 99.95th=[ 404], 00:15:43.477 | 99.99th=[ 404] 00:15:43.477 bw ( KiB/s): min= 4096, max= 4096, per=22.90%, avg=4096.00, stdev= 0.00, samples=1 00:15:43.477 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:43.477 lat (usec) : 250=94.19%, 500=1.69% 00:15:43.477 lat (msec) : 50=4.12% 00:15:43.477 cpu : usr=0.30%, sys=0.99%, ctx=535, majf=0, minf=1 00:15:43.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.477 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.477 job1: (groupid=0, jobs=1): err= 0: pid=3680481: Mon Jul 15 21:53:37 2024 00:15:43.477 read: IOPS=23, BW=93.7KiB/s (95.9kB/s)(96.0KiB/1025msec) 00:15:43.477 slat (nsec): min=9407, max=31668, avg=19133.96, stdev=5563.02 00:15:43.477 clat (usec): min=437, max=41074, avg=37578.58, stdev=11434.58 00:15:43.477 lat (usec): min=461, max=41084, avg=37597.72, stdev=11431.91 00:15:43.477 clat percentiles (usec): 00:15:43.477 | 1.00th=[ 437], 5.00th=[ 469], 10.00th=[40633], 20.00th=[40633], 00:15:43.477 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:43.477 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:43.477 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:43.477 | 99.99th=[41157] 00:15:43.477 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:15:43.477 slat (nsec): min=10004, max=37057, avg=11523.24, stdev=2244.38 00:15:43.477 clat (usec): min=165, max=984, avg=224.47, stdev=50.63 00:15:43.477 lat (usec): min=176, max=994, avg=235.99, stdev=50.84 00:15:43.477 clat percentiles (usec): 00:15:43.477 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:15:43.477 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 212], 60.00th=[ 237], 00:15:43.477 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:15:43.477 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 988], 99.95th=[ 988], 00:15:43.477 | 99.99th=[ 988] 00:15:43.477 bw ( KiB/s): min= 4096, max= 4096, per=22.90%, avg=4096.00, stdev= 0.00, samples=1 00:15:43.477 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:43.477 lat (usec) : 250=66.04%, 500=29.66%, 1000=0.19% 00:15:43.477 lat (msec) : 
50=4.10% 00:15:43.477 cpu : usr=0.20%, sys=1.07%, ctx=536, majf=0, minf=1 00:15:43.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.477 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.477 job2: (groupid=0, jobs=1): err= 0: pid=3680482: Mon Jul 15 21:53:37 2024 00:15:43.477 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:43.477 slat (nsec): min=6763, max=35104, avg=8742.76, stdev=1691.15 00:15:43.478 clat (usec): min=271, max=513, avg=356.81, stdev=37.18 00:15:43.478 lat (usec): min=279, max=521, avg=365.55, stdev=37.56 00:15:43.478 clat percentiles (usec): 00:15:43.478 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 322], 00:15:43.478 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 371], 00:15:43.478 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 396], 95.00th=[ 408], 00:15:43.478 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 510], 99.95th=[ 515], 00:15:43.478 | 99.99th=[ 515] 00:15:43.478 write: IOPS=1900, BW=7600KiB/s (7783kB/s)(7608KiB/1001msec); 0 zone resets 00:15:43.478 slat (nsec): min=8669, max=35587, avg=11872.02, stdev=1959.68 00:15:43.478 clat (usec): min=168, max=433, avg=212.90, stdev=27.71 00:15:43.478 lat (usec): min=177, max=445, avg=224.77, stdev=28.44 00:15:43.478 clat percentiles (usec): 00:15:43.478 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:15:43.478 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 212], 00:15:43.478 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 265], 00:15:43.478 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 388], 99.95th=[ 433], 00:15:43.478 | 99.99th=[ 433] 00:15:43.478 bw ( KiB/s): min= 8192, max= 8192, per=45.80%, avg=8192.00, stdev= 0.00, samples=1 00:15:43.478 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:43.478 lat (usec) : 250=48.66%, 500=51.25%, 750=0.09% 00:15:43.478 cpu : usr=2.90%, sys=5.80%, ctx=3438, majf=0, minf=1 00:15:43.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.478 issued rwts: total=1536,1902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.478 job3: (groupid=0, jobs=1): err= 0: pid=3680483: Mon Jul 15 21:53:37 2024 00:15:43.478 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:43.478 slat (nsec): min=2821, max=26175, avg=8356.22, stdev=1603.62 00:15:43.478 clat (usec): min=329, max=570, avg=386.13, stdev=30.78 00:15:43.478 lat (usec): min=338, max=578, avg=394.48, stdev=30.97 00:15:43.478 clat percentiles (usec): 00:15:43.478 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 367], 00:15:43.478 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 383], 00:15:43.478 | 70.00th=[ 392], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 453], 00:15:43.478 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 562], 99.95th=[ 570], 00:15:43.478 | 99.99th=[ 570] 00:15:43.478 write: IOPS=1655, BW=6621KiB/s (6780kB/s)(6628KiB/1001msec); 0 zone resets 00:15:43.478 slat (nsec): min=10843, max=53803, avg=12221.14, stdev=2151.28 00:15:43.478 clat (usec): 
min=171, max=3756, avg=219.61, stdev=94.55 00:15:43.478 lat (usec): min=182, max=3771, avg=231.83, stdev=94.77 00:15:43.478 clat percentiles (usec): 00:15:43.478 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:15:43.478 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 210], 00:15:43.478 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 306], 00:15:43.478 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 371], 99.95th=[ 3752], 00:15:43.478 | 99.99th=[ 3752] 00:15:43.478 bw ( KiB/s): min= 8192, max= 8192, per=45.80%, avg=8192.00, stdev= 0.00, samples=1 00:15:43.478 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:43.478 lat (usec) : 250=42.22%, 500=57.06%, 750=0.69% 00:15:43.478 lat (msec) : 4=0.03% 00:15:43.478 cpu : usr=2.20%, sys=5.70%, ctx=3195, majf=0, minf=2 00:15:43.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.478 issued rwts: total=1536,1657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.478 00:15:43.478 Run status group 0 (all jobs): 00:15:43.478 READ: bw=11.9MiB/s (12.5MB/s), 86.7KiB/s-6138KiB/s (88.8kB/s-6285kB/s), io=12.2MiB (12.8MB), run=1001-1025msec 00:15:43.478 WRITE: bw=17.5MiB/s (18.3MB/s), 1998KiB/s-7600KiB/s (2046kB/s-7783kB/s), io=17.9MiB (18.8MB), run=1001-1025msec 00:15:43.478 00:15:43.478 Disk stats (read/write): 00:15:43.478 nvme0n1: ios=33/512, merge=0/0, ticks=836/94, in_queue=930, util=88.18% 00:15:43.478 nvme0n2: ios=43/512, merge=0/0, ticks=729/109, in_queue=838, util=87.51% 00:15:43.478 nvme0n3: ios=1368/1536, merge=0/0, ticks=472/296, in_queue=768, util=89.06% 00:15:43.478 nvme0n4: ios=1293/1536, merge=0/0, ticks=951/308, in_queue=1259, util=96.64% 00:15:43.478 21:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:43.478 [global] 00:15:43.478 thread=1 00:15:43.478 invalidate=1 00:15:43.478 rw=write 00:15:43.478 time_based=1 00:15:43.478 runtime=1 00:15:43.478 ioengine=libaio 00:15:43.478 direct=1 00:15:43.478 bs=4096 00:15:43.478 iodepth=128 00:15:43.478 norandommap=0 00:15:43.478 numjobs=1 00:15:43.478 00:15:43.478 verify_dump=1 00:15:43.478 verify_backlog=512 00:15:43.478 verify_state_save=0 00:15:43.478 do_verify=1 00:15:43.478 verify=crc32c-intel 00:15:43.478 [job0] 00:15:43.478 filename=/dev/nvme0n1 00:15:43.478 [job1] 00:15:43.478 filename=/dev/nvme0n2 00:15:43.478 [job2] 00:15:43.478 filename=/dev/nvme0n3 00:15:43.478 [job3] 00:15:43.478 filename=/dev/nvme0n4 00:15:43.478 Could not set queue depth (nvme0n1) 00:15:43.478 Could not set queue depth (nvme0n2) 00:15:43.478 Could not set queue depth (nvme0n3) 00:15:43.478 Could not set queue depth (nvme0n4) 00:15:43.737 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.737 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.737 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.737 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.737 fio-3.35 00:15:43.737 Starting 4 threads 00:15:45.176 00:15:45.176 job0: 
(groupid=0, jobs=1): err= 0: pid=3680854: Mon Jul 15 21:53:39 2024 00:15:45.176 read: IOPS=4710, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1004msec) 00:15:45.176 slat (nsec): min=1072, max=15468k, avg=98047.89, stdev=655659.19 00:15:45.176 clat (usec): min=2940, max=30518, avg=12545.34, stdev=4095.14 00:15:45.176 lat (usec): min=4220, max=30545, avg=12643.39, stdev=4129.87 00:15:45.176 clat percentiles (usec): 00:15:45.176 | 1.00th=[ 5604], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9896], 00:15:45.176 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:15:45.176 | 70.00th=[13042], 80.00th=[14877], 90.00th=[18220], 95.00th=[21627], 00:15:45.176 | 99.00th=[28705], 99.50th=[28705], 99.90th=[29754], 99.95th=[29754], 00:15:45.176 | 99.99th=[30540] 00:15:45.176 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:15:45.176 slat (nsec): min=1877, max=10606k, avg=99607.74, stdev=539842.99 00:15:45.176 clat (usec): min=423, max=42866, avg=13212.80, stdev=5881.87 00:15:45.176 lat (usec): min=1441, max=42876, avg=13312.40, stdev=5918.58 00:15:45.176 clat percentiles (usec): 00:15:45.176 | 1.00th=[ 4817], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[10028], 00:15:45.176 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11600], 60.00th=[11863], 00:15:45.176 | 70.00th=[12649], 80.00th=[16581], 90.00th=[20317], 95.00th=[25035], 00:15:45.176 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:15:45.176 | 99.99th=[42730] 00:15:45.176 bw ( KiB/s): min=20432, max=20480, per=28.83%, avg=20456.00, stdev=33.94, samples=2 00:15:45.176 iops : min= 5108, max= 5120, avg=5114.00, stdev= 8.49, samples=2 00:15:45.176 lat (usec) : 500=0.01%, 750=0.01% 00:15:45.176 lat (msec) : 2=0.10%, 4=0.17%, 10=19.46%, 20=71.73%, 50=8.51% 00:15:45.176 cpu : usr=3.69%, sys=4.19%, ctx=479, majf=0, minf=1 00:15:45.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:45.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.176 issued rwts: total=4729,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.176 job1: (groupid=0, jobs=1): err= 0: pid=3680855: Mon Jul 15 21:53:39 2024 00:15:45.176 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:15:45.176 slat (nsec): min=1107, max=14297k, avg=136948.25, stdev=863002.62 00:15:45.176 clat (usec): min=5692, max=70065, avg=17824.96, stdev=7044.24 00:15:45.176 lat (usec): min=5698, max=77886, avg=17961.91, stdev=7112.33 00:15:45.176 clat percentiles (usec): 00:15:45.176 | 1.00th=[ 7767], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11731], 00:15:45.176 | 30.00th=[12780], 40.00th=[14091], 50.00th=[16319], 60.00th=[17957], 00:15:45.176 | 70.00th=[20841], 80.00th=[23725], 90.00th=[28967], 95.00th=[31589], 00:15:45.176 | 99.00th=[35914], 99.50th=[36439], 99.90th=[69731], 99.95th=[69731], 00:15:45.176 | 99.99th=[69731] 00:15:45.176 write: IOPS=4109, BW=16.1MiB/s (16.8MB/s)(16.1MiB/1005msec); 0 zone resets 00:15:45.176 slat (nsec): min=1906, max=7301.6k, avg=95944.60, stdev=539699.25 00:15:45.176 clat (usec): min=823, max=31499, avg=13170.55, stdev=4804.71 00:15:45.176 lat (usec): min=832, max=31536, avg=13266.49, stdev=4852.84 00:15:45.176 clat percentiles (usec): 00:15:45.176 | 1.00th=[ 3228], 5.00th=[ 6783], 10.00th=[ 8225], 20.00th=[ 9634], 00:15:45.176 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11731], 60.00th=[13173], 
00:15:45.176 | 70.00th=[16057], 80.00th=[18220], 90.00th=[20055], 95.00th=[21365], 00:15:45.176 | 99.00th=[26608], 99.50th=[26608], 99.90th=[26870], 99.95th=[26870], 00:15:45.176 | 99.99th=[31589] 00:15:45.176 bw ( KiB/s): min=14584, max=18184, per=23.09%, avg=16384.00, stdev=2545.58, samples=2 00:15:45.176 iops : min= 3646, max= 4546, avg=4096.00, stdev=636.40, samples=2 00:15:45.176 lat (usec) : 1000=0.04% 00:15:45.176 lat (msec) : 2=0.18%, 4=0.50%, 10=16.20%, 20=62.41%, 50=20.61% 00:15:45.176 lat (msec) : 100=0.06% 00:15:45.176 cpu : usr=2.99%, sys=4.58%, ctx=356, majf=0, minf=1 00:15:45.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:45.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.176 issued rwts: total=4096,4130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.176 job2: (groupid=0, jobs=1): err= 0: pid=3680856: Mon Jul 15 21:53:39 2024 00:15:45.176 read: IOPS=4529, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1009msec) 00:15:45.176 slat (nsec): min=1385, max=14515k, avg=110785.01, stdev=803558.48 00:15:45.176 clat (usec): min=3601, max=40788, avg=13868.58, stdev=4513.44 00:15:45.176 lat (usec): min=4751, max=40791, avg=13979.36, stdev=4572.18 00:15:45.176 clat percentiles (usec): 00:15:45.176 | 1.00th=[ 6849], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10814], 00:15:45.176 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12387], 60.00th=[13435], 00:15:45.176 | 70.00th=[14615], 80.00th=[16909], 90.00th=[19792], 95.00th=[23462], 00:15:45.176 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30802], 99.95th=[31065], 00:15:45.176 | 99.99th=[40633] 00:15:45.176 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:15:45.176 slat (usec): min=2, max=11035, avg=98.75, stdev=556.40 00:15:45.176 clat (usec): min=661, max=46364, avg=14008.38, stdev=6268.59 00:15:45.176 lat (usec): min=1229, max=46378, avg=14107.13, stdev=6300.77 00:15:45.176 clat percentiles (usec): 00:15:45.176 | 1.00th=[ 3228], 5.00th=[ 6652], 10.00th=[ 7767], 20.00th=[10028], 00:15:45.176 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12256], 60.00th=[13829], 00:15:45.176 | 70.00th=[16909], 80.00th=[18482], 90.00th=[20317], 95.00th=[23200], 00:15:45.176 | 99.00th=[40633], 99.50th=[44303], 99.90th=[46400], 99.95th=[46400], 00:15:45.176 | 99.99th=[46400] 00:15:45.176 bw ( KiB/s): min=18376, max=18488, per=25.97%, avg=18432.00, stdev=79.20, samples=2 00:15:45.176 iops : min= 4594, max= 4622, avg=4608.00, stdev=19.80, samples=2 00:15:45.176 lat (usec) : 750=0.01% 00:15:45.176 lat (msec) : 2=0.20%, 4=0.66%, 10=17.45%, 20=72.14%, 50=9.53% 00:15:45.176 cpu : usr=3.77%, sys=5.26%, ctx=506, majf=0, minf=1 00:15:45.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:45.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.176 issued rwts: total=4570,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.176 job3: (groupid=0, jobs=1): err= 0: pid=3680857: Mon Jul 15 21:53:39 2024 00:15:45.176 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:15:45.176 slat (nsec): min=1089, max=25031k, avg=139268.06, stdev=1074620.78 00:15:45.176 clat (usec): min=4166, max=84299, avg=16797.35, 
stdev=12165.36 00:15:45.176 lat (usec): min=4169, max=84323, avg=16936.62, stdev=12268.37 00:15:45.176 clat percentiles (usec): 00:15:45.176 | 1.00th=[ 4817], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[11207], 00:15:45.176 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[13566], 00:15:45.176 | 70.00th=[15008], 80.00th=[17171], 90.00th=[34341], 95.00th=[47449], 00:15:45.176 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[81265], 00:15:45.176 | 99.99th=[84411] 00:15:45.176 write: IOPS=4005, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1009msec); 0 zone resets 00:15:45.177 slat (nsec): min=1913, max=7844.5k, avg=121278.97, stdev=499825.48 00:15:45.177 clat (usec): min=2754, max=49885, avg=16697.87, stdev=7160.88 00:15:45.177 lat (usec): min=4208, max=49887, avg=16819.15, stdev=7180.51 00:15:45.177 clat percentiles (usec): 00:15:45.177 | 1.00th=[ 6259], 5.00th=[ 8979], 10.00th=[10421], 20.00th=[11207], 00:15:45.177 | 30.00th=[11600], 40.00th=[13042], 50.00th=[16188], 60.00th=[17957], 00:15:45.177 | 70.00th=[18744], 80.00th=[20317], 90.00th=[23200], 95.00th=[31589], 00:15:45.177 | 99.00th=[43779], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:15:45.177 | 99.99th=[50070] 00:15:45.177 bw ( KiB/s): min=12880, max=18432, per=22.06%, avg=15656.00, stdev=3925.86, samples=2 00:15:45.177 iops : min= 3220, max= 4608, avg=3914.00, stdev=981.46, samples=2 00:15:45.177 lat (msec) : 4=0.01%, 10=11.03%, 20=70.67%, 50=16.08%, 100=2.22% 00:15:45.177 cpu : usr=1.69%, sys=3.37%, ctx=542, majf=0, minf=1 00:15:45.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:45.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.177 issued rwts: total=3584,4042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.177 00:15:45.177 Run status group 0 (all jobs): 00:15:45.177 READ: bw=65.7MiB/s (68.9MB/s), 13.9MiB/s-18.4MiB/s (14.5MB/s-19.3MB/s), io=66.3MiB (69.5MB), run=1004-1009msec 00:15:45.177 WRITE: bw=69.3MiB/s (72.7MB/s), 15.6MiB/s-19.9MiB/s (16.4MB/s-20.9MB/s), io=69.9MiB (73.3MB), run=1004-1009msec 00:15:45.177 00:15:45.177 Disk stats (read/write): 00:15:45.177 nvme0n1: ios=4137/4133, merge=0/0, ticks=24076/24885, in_queue=48961, util=98.90% 00:15:45.177 nvme0n2: ios=3605/3695, merge=0/0, ticks=28277/17926, in_queue=46203, util=98.99% 00:15:45.177 nvme0n3: ios=3630/4096, merge=0/0, ticks=42393/48335, in_queue=90728, util=99.17% 00:15:45.177 nvme0n4: ios=3329/3584, merge=0/0, ticks=24007/24658, in_queue=48665, util=98.22% 00:15:45.177 21:53:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:45.177 [global] 00:15:45.177 thread=1 00:15:45.177 invalidate=1 00:15:45.177 rw=randwrite 00:15:45.177 time_based=1 00:15:45.177 runtime=1 00:15:45.177 ioengine=libaio 00:15:45.177 direct=1 00:15:45.177 bs=4096 00:15:45.177 iodepth=128 00:15:45.177 norandommap=0 00:15:45.177 numjobs=1 00:15:45.177 00:15:45.177 verify_dump=1 00:15:45.177 verify_backlog=512 00:15:45.177 verify_state_save=0 00:15:45.177 do_verify=1 00:15:45.177 verify=crc32c-intel 00:15:45.177 [job0] 00:15:45.177 filename=/dev/nvme0n1 00:15:45.177 [job1] 00:15:45.177 filename=/dev/nvme0n2 00:15:45.177 [job2] 00:15:45.177 filename=/dev/nvme0n3 00:15:45.177 [job3] 00:15:45.177 filename=/dev/nvme0n4 00:15:45.177 Could not 
set queue depth (nvme0n1) 00:15:45.177 Could not set queue depth (nvme0n2) 00:15:45.177 Could not set queue depth (nvme0n3) 00:15:45.177 Could not set queue depth (nvme0n4) 00:15:45.177 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.177 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.177 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.177 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.177 fio-3.35 00:15:45.177 Starting 4 threads 00:15:46.549 00:15:46.549 job0: (groupid=0, jobs=1): err= 0: pid=3681229: Mon Jul 15 21:53:40 2024 00:15:46.549 read: IOPS=5334, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1005msec) 00:15:46.549 slat (nsec): min=1071, max=10192k, avg=95460.90, stdev=678419.50 00:15:46.549 clat (usec): min=2883, max=31916, avg=12895.16, stdev=4156.27 00:15:46.549 lat (usec): min=2888, max=31922, avg=12990.62, stdev=4193.37 00:15:46.549 clat percentiles (usec): 00:15:46.549 | 1.00th=[ 7439], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9634], 00:15:46.549 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11338], 60.00th=[12125], 00:15:46.549 | 70.00th=[14222], 80.00th=[16057], 90.00th=[19530], 95.00th=[20841], 00:15:46.549 | 99.00th=[25822], 99.50th=[30278], 99.90th=[31851], 99.95th=[31851], 00:15:46.549 | 99.99th=[31851] 00:15:46.549 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:15:46.549 slat (nsec): min=1761, max=11846k, avg=74792.04, stdev=491782.16 00:15:46.549 clat (usec): min=1417, max=33478, avg=10243.89, stdev=4280.24 00:15:46.549 lat (usec): min=1429, max=33486, avg=10318.68, stdev=4302.00 00:15:46.549 clat percentiles (usec): 00:15:46.549 | 1.00th=[ 3359], 5.00th=[ 4424], 10.00th=[ 5866], 20.00th=[ 7177], 00:15:46.549 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10290], 00:15:46.549 | 70.00th=[11207], 80.00th=[11863], 90.00th=[14222], 95.00th=[17957], 00:15:46.549 | 99.00th=[28967], 99.50th=[29492], 99.90th=[32113], 99.95th=[33424], 00:15:46.549 | 99.99th=[33424] 00:15:46.549 bw ( KiB/s): min=20480, max=24576, per=29.97%, avg=22528.00, stdev=2896.31, samples=2 00:15:46.549 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:15:46.549 lat (msec) : 2=0.21%, 4=1.10%, 10=38.02%, 20=55.71%, 50=4.96% 00:15:46.549 cpu : usr=3.98%, sys=5.48%, ctx=514, majf=0, minf=1 00:15:46.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:46.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.549 issued rwts: total=5361,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.550 job1: (groupid=0, jobs=1): err= 0: pid=3681230: Mon Jul 15 21:53:40 2024 00:15:46.550 read: IOPS=4599, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:15:46.550 slat (nsec): min=988, max=16071k, avg=108112.23, stdev=695147.89 00:15:46.550 clat (usec): min=2939, max=44675, avg=13498.44, stdev=5741.72 00:15:46.550 lat (usec): min=3586, max=55959, avg=13606.56, stdev=5785.56 00:15:46.550 clat percentiles (usec): 00:15:46.550 | 1.00th=[ 5800], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9896], 00:15:46.550 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11863], 60.00th=[12649], 00:15:46.550 
| 70.00th=[13960], 80.00th=[15926], 90.00th=[22152], 95.00th=[24773], 00:15:46.550 | 99.00th=[35390], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:46.550 | 99.99th=[44827] 00:15:46.550 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:15:46.550 slat (nsec): min=1705, max=12518k, avg=93241.74, stdev=510343.29 00:15:46.550 clat (usec): min=1268, max=69410, avg=12665.65, stdev=8903.37 00:15:46.550 lat (usec): min=1311, max=69412, avg=12758.89, stdev=8941.66 00:15:46.550 clat percentiles (usec): 00:15:46.550 | 1.00th=[ 4883], 5.00th=[ 6390], 10.00th=[ 7898], 20.00th=[ 9503], 00:15:46.550 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10552], 60.00th=[11207], 00:15:46.550 | 70.00th=[12518], 80.00th=[12911], 90.00th=[16057], 95.00th=[21103], 00:15:46.550 | 99.00th=[64226], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:15:46.550 | 99.99th=[69731] 00:15:46.550 bw ( KiB/s): min=16384, max=23640, per=26.63%, avg=20012.00, stdev=5130.77, samples=2 00:15:46.550 iops : min= 4096, max= 5910, avg=5003.00, stdev=1282.69, samples=2 00:15:46.550 lat (msec) : 2=0.08%, 4=0.23%, 10=26.76%, 20=64.67%, 50=7.20% 00:15:46.550 lat (msec) : 100=1.06% 00:15:46.550 cpu : usr=3.39%, sys=3.59%, ctx=608, majf=0, minf=1 00:15:46.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:46.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.550 issued rwts: total=4618,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.550 job2: (groupid=0, jobs=1): err= 0: pid=3681231: Mon Jul 15 21:53:40 2024 00:15:46.550 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:15:46.550 slat (nsec): min=1197, max=19300k, avg=96498.58, stdev=682434.69 00:15:46.550 clat (usec): min=4694, max=96601, avg=14847.32, stdev=9751.29 00:15:46.550 lat (usec): min=6839, max=96609, avg=14943.82, stdev=9776.84 00:15:46.550 clat percentiles (usec): 00:15:46.550 | 1.00th=[ 7242], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10945], 00:15:46.550 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13173], 60.00th=[13698], 00:15:46.550 | 70.00th=[14091], 80.00th=[14484], 90.00th=[17433], 95.00th=[28705], 00:15:46.550 | 99.00th=[73925], 99.50th=[82314], 99.90th=[96994], 99.95th=[96994], 00:15:46.550 | 99.99th=[96994] 00:15:46.550 write: IOPS=3427, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1009msec); 0 zone resets 00:15:46.550 slat (usec): min=2, max=44792, avg=192.37, stdev=1258.13 00:15:46.550 clat (msec): min=3, max=116, avg=23.16, stdev=27.55 00:15:46.550 lat (msec): min=5, max=116, avg=23.35, stdev=27.75 00:15:46.550 clat percentiles (msec): 00:15:46.550 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 12], 00:15:46.550 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:15:46.550 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 70], 95.00th=[ 105], 00:15:46.550 | 99.00th=[ 114], 99.50th=[ 114], 99.90th=[ 117], 99.95th=[ 117], 00:15:46.550 | 99.99th=[ 117] 00:15:46.550 bw ( KiB/s): min= 9808, max=16832, per=17.72%, avg=13320.00, stdev=4966.72, samples=2 00:15:46.550 iops : min= 2452, max= 4208, avg=3330.00, stdev=1241.68, samples=2 00:15:46.550 lat (msec) : 4=0.02%, 10=10.25%, 20=77.44%, 50=4.61%, 100=4.27% 00:15:46.550 lat (msec) : 250=3.42% 00:15:46.550 cpu : usr=3.08%, sys=4.07%, ctx=255, majf=0, minf=1 00:15:46.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:15:46.550 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.550 issued rwts: total=3072,3458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.550 job3: (groupid=0, jobs=1): err= 0: pid=3681232: Mon Jul 15 21:53:40 2024 00:15:46.550 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:15:46.550 slat (nsec): min=1176, max=11907k, avg=100676.56, stdev=699756.94 00:15:46.550 clat (usec): min=4311, max=44651, avg=13582.57, stdev=4887.98 00:15:46.550 lat (usec): min=4325, max=44719, avg=13683.25, stdev=4923.46 00:15:46.550 clat percentiles (usec): 00:15:46.550 | 1.00th=[ 5538], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11600], 00:15:46.550 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:15:46.550 | 70.00th=[13173], 80.00th=[14484], 90.00th=[16319], 95.00th=[22938], 00:15:46.550 | 99.00th=[37487], 99.50th=[40633], 99.90th=[40633], 99.95th=[44827], 00:15:46.550 | 99.99th=[44827] 00:15:46.550 write: IOPS=4715, BW=18.4MiB/s (19.3MB/s)(18.6MiB/1007msec); 0 zone resets 00:15:46.550 slat (usec): min=2, max=10314, avg=102.87, stdev=681.13 00:15:46.550 clat (usec): min=1791, max=32977, avg=13245.39, stdev=4809.14 00:15:46.550 lat (usec): min=1801, max=32982, avg=13348.26, stdev=4848.87 00:15:46.550 clat percentiles (usec): 00:15:46.550 | 1.00th=[ 6390], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 9634], 00:15:46.550 | 30.00th=[10552], 40.00th=[11731], 50.00th=[12649], 60.00th=[13173], 00:15:46.550 | 70.00th=[13698], 80.00th=[15795], 90.00th=[20055], 95.00th=[22676], 00:15:46.550 | 99.00th=[30278], 99.50th=[30802], 99.90th=[32900], 99.95th=[32900], 00:15:46.550 | 99.99th=[32900] 00:15:46.550 bw ( KiB/s): min=16432, max=20568, per=24.61%, avg=18500.00, stdev=2924.59, samples=2 00:15:46.550 iops : min= 4108, max= 5142, avg=4625.00, stdev=731.15, samples=2 00:15:46.550 lat (msec) : 2=0.10%, 10=16.42%, 20=75.38%, 50=8.11% 00:15:46.550 cpu : usr=3.48%, sys=5.47%, ctx=319, majf=0, minf=1 00:15:46.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:46.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.550 issued rwts: total=4608,4749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.550 00:15:46.550 Run status group 0 (all jobs): 00:15:46.550 READ: bw=68.4MiB/s (71.7MB/s), 11.9MiB/s-20.8MiB/s (12.5MB/s-21.8MB/s), io=69.0MiB (72.3MB), run=1004-1009msec 00:15:46.550 WRITE: bw=73.4MiB/s (77.0MB/s), 13.4MiB/s-21.9MiB/s (14.0MB/s-23.0MB/s), io=74.1MiB (77.7MB), run=1004-1009msec 00:15:46.550 00:15:46.550 Disk stats (read/write): 00:15:46.550 nvme0n1: ios=4397/4608, merge=0/0, ticks=47585/40155, in_queue=87740, util=86.96% 00:15:46.550 nvme0n2: ios=3740/4096, merge=0/0, ticks=20938/21804, in_queue=42742, util=85.16% 00:15:46.550 nvme0n3: ios=3130/3231, merge=0/0, ticks=21826/26696, in_queue=48522, util=95.83% 00:15:46.550 nvme0n4: ios=3623/4096, merge=0/0, ticks=29695/32475, in_queue=62170, util=98.85% 00:15:46.550 21:53:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:46.550 21:53:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3681463 00:15:46.550 21:53:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:46.550 21:53:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:46.550 [global] 00:15:46.550 thread=1 00:15:46.550 invalidate=1 00:15:46.550 rw=read 00:15:46.550 time_based=1 00:15:46.550 runtime=10 00:15:46.550 ioengine=libaio 00:15:46.550 direct=1 00:15:46.550 bs=4096 00:15:46.550 iodepth=1 00:15:46.550 norandommap=1 00:15:46.550 numjobs=1 00:15:46.550 00:15:46.550 [job0] 00:15:46.550 filename=/dev/nvme0n1 00:15:46.550 [job1] 00:15:46.550 filename=/dev/nvme0n2 00:15:46.550 [job2] 00:15:46.550 filename=/dev/nvme0n3 00:15:46.550 [job3] 00:15:46.550 filename=/dev/nvme0n4 00:15:46.550 Could not set queue depth (nvme0n1) 00:15:46.550 Could not set queue depth (nvme0n2) 00:15:46.550 Could not set queue depth (nvme0n3) 00:15:46.550 Could not set queue depth (nvme0n4) 00:15:46.809 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.809 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.809 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.809 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.809 fio-3.35 00:15:46.809 Starting 4 threads 00:15:50.096 21:53:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:50.096 21:53:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:50.096 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=847872, buflen=4096 00:15:50.096 fio: pid=3681605, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:50.096 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26095616, buflen=4096 00:15:50.096 fio: pid=3681604, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:50.096 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.096 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:50.096 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.096 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:50.096 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=311296, buflen=4096 00:15:50.096 fio: pid=3681601, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:50.355 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=2052096, buflen=4096 00:15:50.355 fio: pid=3681602, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:50.355 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.355 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:50.355 00:15:50.355 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u 
error, error=Remote I/O error): pid=3681601: Mon Jul 15 21:53:44 2024 00:15:50.355 read: IOPS=24, BW=97.7KiB/s (100.0kB/s)(304KiB/3113msec) 00:15:50.355 slat (usec): min=10, max=9853, avg=236.23, stdev=1346.86 00:15:50.355 clat (usec): min=502, max=41213, avg=40438.64, stdev=4642.76 00:15:50.355 lat (usec): min=562, max=47924, avg=40548.32, stdev=4715.99 00:15:50.355 clat percentiles (usec): 00:15:50.355 | 1.00th=[ 502], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:50.355 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:50.355 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:50.355 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:50.355 | 99.99th=[41157] 00:15:50.355 bw ( KiB/s): min= 92, max= 104, per=1.13%, avg=98.00, stdev= 4.90, samples=6 00:15:50.355 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:15:50.355 lat (usec) : 750=1.30% 00:15:50.355 lat (msec) : 50=97.40% 00:15:50.355 cpu : usr=0.00%, sys=0.13%, ctx=81, majf=0, minf=1 00:15:50.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.355 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.355 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.355 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3681602: Mon Jul 15 21:53:44 2024 00:15:50.355 read: IOPS=152, BW=610KiB/s (624kB/s)(2004KiB/3287msec) 00:15:50.355 slat (usec): min=6, max=16863, avg=64.76, stdev=888.83 00:15:50.355 clat (usec): min=262, max=44096, avg=6449.94, stdev=14584.23 00:15:50.355 lat (usec): min=269, max=58068, avg=6514.82, stdev=14687.79 00:15:50.355 clat percentiles (usec): 00:15:50.355 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 289], 00:15:50.355 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 355], 00:15:50.355 | 70.00th=[ 469], 80.00th=[ 510], 90.00th=[41157], 95.00th=[41681], 00:15:50.355 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:15:50.355 | 99.99th=[44303] 00:15:50.355 bw ( KiB/s): min= 96, max= 1096, per=5.12%, avg=446.50, stdev=403.95, samples=6 00:15:50.355 iops : min= 24, max= 274, avg=111.50, stdev=101.09, samples=6 00:15:50.355 lat (usec) : 500=72.51%, 750=12.35% 00:15:50.355 lat (msec) : 50=14.94% 00:15:50.355 cpu : usr=0.06%, sys=0.15%, ctx=509, majf=0, minf=1 00:15:50.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.355 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.355 issued rwts: total=502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.355 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3681604: Mon Jul 15 21:53:44 2024 00:15:50.355 read: IOPS=2195, BW=8782KiB/s (8992kB/s)(24.9MiB/2902msec) 00:15:50.355 slat (usec): min=6, max=11711, avg=10.72, stdev=190.91 00:15:50.355 clat (usec): min=250, max=41438, avg=440.30, stdev=2409.78 00:15:50.355 lat (usec): min=258, max=41446, avg=451.02, stdev=2418.10 00:15:50.355 clat percentiles (usec): 00:15:50.355 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285], 00:15:50.355 | 
30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 293], 00:15:50.355 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 306], 95.00th=[ 314], 00:15:50.355 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[41157], 99.95th=[41157], 00:15:50.355 | 99.99th=[41681] 00:15:50.355 bw ( KiB/s): min= 216, max=13392, per=95.56%, avg=8320.00, stdev=6856.92, samples=5 00:15:50.355 iops : min= 54, max= 3348, avg=2080.00, stdev=1714.23, samples=5 00:15:50.355 lat (usec) : 500=98.60%, 750=1.02% 00:15:50.355 lat (msec) : 50=0.36% 00:15:50.355 cpu : usr=0.79%, sys=1.93%, ctx=6374, majf=0, minf=1 00:15:50.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.355 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.355 issued rwts: total=6372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.355 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3681605: Mon Jul 15 21:53:44 2024 00:15:50.355 read: IOPS=76, BW=304KiB/s (311kB/s)(828KiB/2724msec) 00:15:50.355 slat (nsec): min=6705, max=67562, avg=12398.92, stdev=7774.38 00:15:50.355 clat (usec): min=283, max=42163, avg=13044.25, stdev=19005.04 00:15:50.355 lat (usec): min=290, max=42171, avg=13056.60, stdev=19011.78 00:15:50.355 clat percentiles (usec): 00:15:50.355 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 330], 00:15:50.355 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 383], 60.00th=[ 445], 00:15:50.355 | 70.00th=[40633], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:15:50.355 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:50.355 | 99.99th=[42206] 00:15:50.355 bw ( KiB/s): min= 96, max= 688, per=3.71%, avg=323.20, stdev=304.71, samples=5 00:15:50.355 iops : min= 24, max= 172, avg=80.80, stdev=76.18, samples=5 00:15:50.355 lat (usec) : 500=68.75% 00:15:50.355 lat (msec) : 50=30.77% 00:15:50.355 cpu : usr=0.04%, sys=0.11%, ctx=208, majf=0, minf=2 00:15:50.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.355 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.355 issued rwts: total=208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.355 00:15:50.355 Run status group 0 (all jobs): 00:15:50.355 READ: bw=8707KiB/s (8916kB/s), 97.7KiB/s-8782KiB/s (100.0kB/s-8992kB/s), io=27.9MiB (29.3MB), run=2724-3287msec 00:15:50.355 00:15:50.355 Disk stats (read/write): 00:15:50.355 nvme0n1: ios=77/0, merge=0/0, ticks=3084/0, in_queue=3084, util=95.13% 00:15:50.355 nvme0n2: ios=378/0, merge=0/0, ticks=3693/0, in_queue=3693, util=99.57% 00:15:50.355 nvme0n3: ios=6292/0, merge=0/0, ticks=2736/0, in_queue=2736, util=95.88% 00:15:50.355 nvme0n4: ios=204/0, merge=0/0, ticks=2575/0, in_queue=2575, util=96.49% 00:15:50.614 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.614 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:50.614 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.614 
21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:50.872 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:50.873 21:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:51.131 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:51.131 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:51.131 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:51.131 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3681463 00:15:51.131 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:51.131 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.389 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:51.389 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:51.390 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:51.390 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.390 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:51.390 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.390 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:51.390 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:51.390 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:51.390 nvmf hotplug test: fio failed as expected 00:15:51.390 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:51.648 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.649 rmmod nvme_tcp 00:15:51.649 rmmod nvme_fabrics 00:15:51.649 rmmod nvme_keyring 
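The xtrace entries above show how teardown is verified: after nvme disconnect, waitforserial_disconnect (autotest_common.sh@1219-1231) polls lsblk until no block device reports the test serial, and only then does the script delete the subsystem and unload the kernel modules. A minimal bash sketch of that polling helper, reconstructed from the trace — the retry bound and 2-second sleep are assumptions carried over from the connect-side waitforserial helper traced earlier and are not visible in this log:

    waitforserial_disconnect() {
        local serial="$1" i=0
        # Poll until no block device reports the given NVMe serial number.
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1   # assumed retry bound, mirroring waitforserial
            sleep 2                      # assumed poll interval
        done
        return 0
    }

In this run the device was already gone by the time the helper ran, so it falls straight through to the 'return 0' recorded at @1231 above.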
00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3678716 ']' 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3678716 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3678716 ']' 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3678716 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3678716 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3678716' 00:15:51.649 killing process with pid 3678716 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3678716 00:15:51.649 21:53:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3678716 00:15:51.907 21:53:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.907 21:53:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.907 21:53:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.907 21:53:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.907 21:53:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.907 21:53:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.907 21:53:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.907 21:53:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.885 21:53:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.885 00:15:53.885 real 0m25.617s 00:15:53.885 user 1m45.749s 00:15:53.885 sys 0m6.945s 00:15:53.885 21:53:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.885 21:53:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.885 ************************************ 00:15:53.885 END TEST nvmf_fio_target 00:15:53.885 ************************************ 00:15:53.885 21:53:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.885 21:53:48 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:53.885 21:53:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.885 21:53:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.885 21:53:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.145 ************************************ 00:15:54.145 START TEST nvmf_bdevio 00:15:54.145 ************************************ 00:15:54.145 21:53:48 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:54.145 * Looking for test storage... 00:15:54.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.145 21:53:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.145 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:54.145 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.145 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.145 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.145 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:54.146 21:53:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:59.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:59.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:59.420 Found net devices under 0000:86:00.0: cvl_0_0 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:59.420 
Found net devices under 0000:86:00.1: cvl_0_1 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.420 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:59.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:15:59.679 00:15:59.679 --- 10.0.0.2 ping statistics --- 00:15:59.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.679 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:15:59.679 00:15:59.679 --- 10.0.0.1 ping statistics --- 00:15:59.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.679 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3685855 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3685855 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3685855 ']' 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.679 21:53:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:59.679 [2024-07-15 21:53:53.892044] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:15:59.679 [2024-07-15 21:53:53.892085] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.679 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.937 [2024-07-15 21:53:53.950730] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.937 [2024-07-15 21:53:54.032889] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.937 [2024-07-15 21:53:54.032926] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:59.937 [2024-07-15 21:53:54.032933] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.937 [2024-07-15 21:53:54.032939] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.937 [2024-07-15 21:53:54.032944] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.937 [2024-07-15 21:53:54.033056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:59.937 [2024-07-15 21:53:54.033164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:59.937 [2024-07-15 21:53:54.033272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.937 [2024-07-15 21:53:54.033273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:00.504 [2024-07-15 21:53:54.728030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.504 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:00.762 Malloc0 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
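The rpc_cmd calls traced above amount to the complete target-side bring-up that bdevio is about to exercise: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, an allow-any-host subsystem with a fixed serial, the namespace, and a listener on 10.0.0.2:4420. As a standalone sketch, the same five RPCs look like this; every flag is taken verbatim from the trace, while the rpc() wrapper and SPDK_DIR are illustrative stand-ins for what rpc_cmd resolves to (scripts/rpc.py talking to /var/tmp/spdk.sock):

#!/usr/bin/env bash
# Minimal target-side setup mirroring the traced rpc_cmd sequence.
# Assumes nvmf_tgt is already running (here it runs inside cvl_0_0_ns_spdk,
# but its RPC socket /var/tmp/spdk.sock is reachable from the root ns).
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192                    # transport flags as traced
rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                                   # -a = allow any host
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as namespace 1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                                 # the listener the initiator dials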
00:16:00.762 [2024-07-15 21:53:54.771448] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:00.762 { 00:16:00.762 "params": { 00:16:00.762 "name": "Nvme$subsystem", 00:16:00.762 "trtype": "$TEST_TRANSPORT", 00:16:00.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:00.762 "adrfam": "ipv4", 00:16:00.762 "trsvcid": "$NVMF_PORT", 00:16:00.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:00.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:00.762 "hdgst": ${hdgst:-false}, 00:16:00.762 "ddgst": ${ddgst:-false} 00:16:00.762 }, 00:16:00.762 "method": "bdev_nvme_attach_controller" 00:16:00.762 } 00:16:00.762 EOF 00:16:00.762 )") 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:00.762 21:53:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:00.762 "params": { 00:16:00.762 "name": "Nvme1", 00:16:00.762 "trtype": "tcp", 00:16:00.762 "traddr": "10.0.0.2", 00:16:00.762 "adrfam": "ipv4", 00:16:00.762 "trsvcid": "4420", 00:16:00.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:00.762 "hdgst": false, 00:16:00.762 "ddgst": false 00:16:00.762 }, 00:16:00.762 "method": "bdev_nvme_attach_controller" 00:16:00.762 }' 00:16:00.762 [2024-07-15 21:53:54.819448] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:16:00.762 [2024-07-15 21:53:54.819494] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3686085 ] 00:16:00.762 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.762 [2024-07-15 21:53:54.875220] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:00.762 [2024-07-15 21:53:54.950850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.762 [2024-07-15 21:53:54.950947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.762 [2024-07-15 21:53:54.950949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.328 I/O targets: 00:16:01.328 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:01.328 00:16:01.328 00:16:01.328 CUnit - A unit testing framework for C - Version 2.1-3 00:16:01.328 http://cunit.sourceforge.net/ 00:16:01.328 00:16:01.328 00:16:01.328 Suite: bdevio tests on: Nvme1n1 00:16:01.328 Test: blockdev write read block ...passed 00:16:01.328 Test: blockdev write zeroes read block ...passed 00:16:01.328 Test: blockdev write zeroes read no split ...passed 00:16:01.328 Test: blockdev write zeroes read split ...passed 00:16:01.328 Test: blockdev write zeroes read split partial ...passed 00:16:01.328 Test: blockdev reset ...[2024-07-15 21:53:55.469036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:01.328 [2024-07-15 21:53:55.469097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c828c0 (9): Bad file descriptor 00:16:01.328 [2024-07-15 21:53:55.483774] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
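Note how bdevio itself was launched: --json /dev/fd/62 means the bdev_nvme_attach_controller configuration printed above never touches disk; gen_nvmf_target_json emits it and the shell hands it over as an anonymous descriptor. A sketch of the same pattern using process substitution follows. The inner "params"/"method" object is copied from the trace, but the outer "subsystems"/"bdev" wrapper is an assumption, since the trace only shows the fragment being printed before it is wrapped:

# Hand a generated attach-controller config to bdevio with no temp file;
# <( ) is the mechanism behind the /dev/fd/62 seen in the trace.
"$SPDK_DIR/test/bdev/bdevio/bdevio" --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]
  }]
}
EOF
)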
00:16:01.328 passed 00:16:01.328 Test: blockdev write read 8 blocks ...passed 00:16:01.328 Test: blockdev write read size > 128k ...passed 00:16:01.328 Test: blockdev write read invalid size ...passed 00:16:01.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:01.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:01.328 Test: blockdev write read max offset ...passed 00:16:01.587 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:01.587 Test: blockdev writev readv 8 blocks ...passed 00:16:01.587 Test: blockdev writev readv 30 x 1block ...passed 00:16:01.587 Test: blockdev writev readv block ...passed 00:16:01.587 Test: blockdev writev readv size > 128k ...passed 00:16:01.587 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:01.587 Test: blockdev comparev and writev ...[2024-07-15 21:53:55.741759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:01.587 [2024-07-15 21:53:55.741786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.741800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:01.587 [2024-07-15 21:53:55.741808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.742081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:01.587 [2024-07-15 21:53:55.742092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.742103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:01.587 [2024-07-15 21:53:55.742109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.742386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:01.587 [2024-07-15 21:53:55.742398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.742410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:01.587 [2024-07-15 21:53:55.742417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.742708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:01.587 [2024-07-15 21:53:55.742718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.742729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:01.587 [2024-07-15 21:53:55.742736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:01.587 passed 00:16:01.587 Test: blockdev nvme passthru rw ...passed 00:16:01.587 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:53:55.825672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:01.587 [2024-07-15 21:53:55.825690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.825842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:01.587 [2024-07-15 21:53:55.825853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.826004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:01.587 [2024-07-15 21:53:55.826014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:01.587 [2024-07-15 21:53:55.826166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:01.587 [2024-07-15 21:53:55.826175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:01.587 passed 00:16:01.845 Test: blockdev nvme admin passthru ...passed 00:16:01.845 Test: blockdev copy ...passed 00:16:01.845 00:16:01.845 Run Summary: Type Total Ran Passed Failed Inactive 00:16:01.845 suites 1 1 n/a 0 0 00:16:01.845 tests 23 23 23 0 0 00:16:01.845 asserts 152 152 152 0 n/a 00:16:01.845 00:16:01.845 Elapsed time = 1.236 seconds 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.845 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.845 rmmod nvme_tcp 00:16:01.845 rmmod nvme_fabrics 00:16:01.845 rmmod nvme_keyring 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3685855 ']' 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3685855 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3685855 ']' 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3685855 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3685855 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3685855' 00:16:02.105 killing process with pid 3685855 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3685855 00:16:02.105 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3685855 00:16:02.363 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.363 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.363 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.363 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.363 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.363 21:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.363 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.363 21:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.267 21:53:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.267 00:16:04.267 real 0m10.263s 00:16:04.267 user 0m13.313s 00:16:04.267 sys 0m4.738s 00:16:04.267 21:53:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.267 21:53:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:04.267 ************************************ 00:16:04.267 END TEST nvmf_bdevio 00:16:04.267 ************************************ 00:16:04.267 21:53:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.267 21:53:58 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:04.267 21:53:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.267 21:53:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.267 21:53:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.267 ************************************ 00:16:04.267 START TEST nvmf_auth_target 00:16:04.267 ************************************ 00:16:04.267 21:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:04.527 * Looking for test storage... 
00:16:04.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.527 21:53:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.528 21:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.800 21:54:03 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:09.800 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:09.800 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:09.800 Found net devices under 0000:86:00.0: cvl_0_0 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:09.800 Found net devices under 0000:86:00.1: cvl_0_1 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.800 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:09.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:16:09.801 00:16:09.801 --- 10.0.0.2 ping statistics --- 00:16:09.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.801 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:16:09.801 00:16:09.801 --- 10.0.0.1 ping statistics --- 00:16:09.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.801 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3689740 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3689740 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3689740 ']' 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
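Both halves of this log (the bdevio run above and the auth-target run here) repeat the same nvmf_tcp_init sequence: one port of the two-port E810 NIC is moved into a private network namespace to act as the target (cvl_0_0, 10.0.0.2), its sibling stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), and a single ping in each direction proves the link before any NVMe traffic flows. Condensed from the traced commands:

# Two-namespace topology used by every tcp/phy test in this log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> root ns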
00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.801 21:54:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3689973 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ffdc4d3c14f6ab7a03698ee3ebca74d12909dac34ede771c 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wuc 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ffdc4d3c14f6ab7a03698ee3ebca74d12909dac34ede771c 0 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ffdc4d3c14f6ab7a03698ee3ebca74d12909dac34ede771c 0 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ffdc4d3c14f6ab7a03698ee3ebca74d12909dac34ede771c 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wuc 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wuc 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # 
keys[0]=/tmp/spdk.key-null.wuc 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=77bb05214a312ae73605c578b05752a12c6319b8e6874cbb54fec5dd90bad75d 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6OB 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 77bb05214a312ae73605c578b05752a12c6319b8e6874cbb54fec5dd90bad75d 3 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 77bb05214a312ae73605c578b05752a12c6319b8e6874cbb54fec5dd90bad75d 3 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=77bb05214a312ae73605c578b05752a12c6319b8e6874cbb54fec5dd90bad75d 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6OB 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6OB 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.6OB 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=329c6c5ee072f5be62fac9d25edef3ff 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8Cf 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 329c6c5ee072f5be62fac9d25edef3ff 1 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 329c6c5ee072f5be62fac9d25edef3ff 1 00:16:10.391 21:54:04 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=329c6c5ee072f5be62fac9d25edef3ff 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8Cf 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8Cf 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.8Cf 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:10.391 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:10.650 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2ded8977399183cf8a0b3b442d6af8988dc7729fe1ae7fa4 00:16:10.650 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.edi 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2ded8977399183cf8a0b3b442d6af8988dc7729fe1ae7fa4 2 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2ded8977399183cf8a0b3b442d6af8988dc7729fe1ae7fa4 2 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2ded8977399183cf8a0b3b442d6af8988dc7729fe1ae7fa4 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.edi 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.edi 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.edi 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:10.651 21:54:04 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7678d41dab8bd44062a1eb6140f103ce16160acd11f741cb 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.I5w 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7678d41dab8bd44062a1eb6140f103ce16160acd11f741cb 2 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7678d41dab8bd44062a1eb6140f103ce16160acd11f741cb 2 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7678d41dab8bd44062a1eb6140f103ce16160acd11f741cb 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.I5w 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.I5w 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.I5w 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=64170b0dd6e941f10cef024f5004d676 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.h5L 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 64170b0dd6e941f10cef024f5004d676 1 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 64170b0dd6e941f10cef024f5004d676 1 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=64170b0dd6e941f10cef024f5004d676 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.h5L 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.h5L 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.h5L 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0ceede3b639db46b83c60c4fcc1cec07d56423e84b63917823654c8032df32f4 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wEI 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0ceede3b639db46b83c60c4fcc1cec07d56423e84b63917823654c8032df32f4 3 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0ceede3b639db46b83c60c4fcc1cec07d56423e84b63917823654c8032df32f4 3 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0ceede3b639db46b83c60c4fcc1cec07d56423e84b63917823654c8032df32f4 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wEI 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wEI 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.wEI 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3689740 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3689740 ']' 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
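With the target listening on /var/tmp/spdk.sock and the host app on /var/tmp/host.sock, the remainder of the trace registers each generated key file on both sides and then drives one authenticated attach per digest/dhgroup/key combination. Stripped of the xtrace noise, the key0 pass under sha256/null reduces to the sketch below; rpc.py abbreviates the scripts/rpc.py invocations made through the rpc_cmd (target) and hostrpc (host, -s /var/tmp/host.sock) wrappers seen in the log, and <hostnqn> stands for the nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 host NQN used throughout:
# register the generated secrets as keyring objects on the target and the host
rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.wuc
rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6OB
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wuc
rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6OB
# restrict the host to one digest/dhgroup combination, then allow the host on the subsystem
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
# attach with bidirectional authentication and verify the qpair reports auth.state == "completed"
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
Each pass then detaches (bdev_nvme_detach_controller nvme0), reconnects once more through the kernel initiator (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...), disconnects, and removes the host from the subsystem before moving to the next key/dhgroup combination.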
00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.651 21:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.910 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.910 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:10.910 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3689973 /var/tmp/host.sock 00:16:10.910 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3689973 ']' 00:16:10.910 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:10.910 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.910 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:10.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:10.910 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.910 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wuc 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wuc 00:16:11.168 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wuc 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.6OB ]] 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6OB 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6OB 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6OB 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8Cf 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8Cf 00:16:11.428 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8Cf 00:16:11.687 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.edi ]] 00:16:11.687 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.edi 00:16:11.687 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.687 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.687 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.687 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.edi 00:16:11.687 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.edi 00:16:11.946 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:11.946 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.I5w 00:16:11.946 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.946 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.946 21:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.946 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.I5w 00:16:11.946 21:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.I5w 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.h5L ]] 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.h5L 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.h5L 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.h5L 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wEI 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.wEI 00:16:12.204 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.wEI 00:16:12.463 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:12.463 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:12.463 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.463 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.463 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.463 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.722 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:12.722 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.722 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:12.722 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:12.722 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:12.723 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.723 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.723 21:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.723 21:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.723 21:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.723 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.723 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.723 00:16:12.981 21:54:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.981 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.981 21:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.981 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.981 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.981 21:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.981 21:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.981 21:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.981 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.981 { 00:16:12.981 "cntlid": 1, 00:16:12.981 "qid": 0, 00:16:12.981 "state": "enabled", 00:16:12.981 "thread": "nvmf_tgt_poll_group_000", 00:16:12.981 "listen_address": { 00:16:12.981 "trtype": "TCP", 00:16:12.981 "adrfam": "IPv4", 00:16:12.981 "traddr": "10.0.0.2", 00:16:12.981 "trsvcid": "4420" 00:16:12.981 }, 00:16:12.981 "peer_address": { 00:16:12.981 "trtype": "TCP", 00:16:12.981 "adrfam": "IPv4", 00:16:12.981 "traddr": "10.0.0.1", 00:16:12.981 "trsvcid": "58900" 00:16:12.981 }, 00:16:12.981 "auth": { 00:16:12.981 "state": "completed", 00:16:12.981 "digest": "sha256", 00:16:12.981 "dhgroup": "null" 00:16:12.981 } 00:16:12.981 } 00:16:12.981 ]' 00:16:12.981 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.981 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.981 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.242 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:13.242 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.242 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.242 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.242 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.242 21:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:16:13.836 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.836 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.836 21:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.836 21:54:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.836 21:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.836 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.836 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.836 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.095 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.353 00:16:14.353 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.353 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.353 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.611 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.611 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.611 21:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.611 21:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.611 21:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.611 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.611 { 00:16:14.611 "cntlid": 3, 00:16:14.611 "qid": 0, 00:16:14.611 
"state": "enabled", 00:16:14.611 "thread": "nvmf_tgt_poll_group_000", 00:16:14.611 "listen_address": { 00:16:14.611 "trtype": "TCP", 00:16:14.611 "adrfam": "IPv4", 00:16:14.611 "traddr": "10.0.0.2", 00:16:14.611 "trsvcid": "4420" 00:16:14.611 }, 00:16:14.611 "peer_address": { 00:16:14.611 "trtype": "TCP", 00:16:14.611 "adrfam": "IPv4", 00:16:14.611 "traddr": "10.0.0.1", 00:16:14.611 "trsvcid": "58914" 00:16:14.611 }, 00:16:14.611 "auth": { 00:16:14.611 "state": "completed", 00:16:14.611 "digest": "sha256", 00:16:14.611 "dhgroup": "null" 00:16:14.611 } 00:16:14.611 } 00:16:14.612 ]' 00:16:14.612 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.612 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.612 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.612 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:14.612 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.612 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.612 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.612 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.870 21:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:16:15.437 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.437 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.437 21:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.437 21:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.437 21:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.437 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.437 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.437 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:15.697 21:54:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.697 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.955 00:16:15.955 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.955 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.955 21:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.955 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.955 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.955 21:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.955 21:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.955 21:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.955 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.955 { 00:16:15.955 "cntlid": 5, 00:16:15.955 "qid": 0, 00:16:15.955 "state": "enabled", 00:16:15.955 "thread": "nvmf_tgt_poll_group_000", 00:16:15.955 "listen_address": { 00:16:15.955 "trtype": "TCP", 00:16:15.955 "adrfam": "IPv4", 00:16:15.955 "traddr": "10.0.0.2", 00:16:15.955 "trsvcid": "4420" 00:16:15.955 }, 00:16:15.955 "peer_address": { 00:16:15.955 "trtype": "TCP", 00:16:15.955 "adrfam": "IPv4", 00:16:15.955 "traddr": "10.0.0.1", 00:16:15.955 "trsvcid": "58930" 00:16:15.955 }, 00:16:15.955 "auth": { 00:16:15.955 "state": "completed", 00:16:15.955 "digest": "sha256", 00:16:15.956 "dhgroup": "null" 00:16:15.956 } 00:16:15.956 } 00:16:15.956 ]' 00:16:15.956 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.214 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.214 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.214 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:16.214 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:16.214 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.214 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.214 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.473 21:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.040 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.299 00:16:17.299 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.299 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.299 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.557 { 00:16:17.557 "cntlid": 7, 00:16:17.557 "qid": 0, 00:16:17.557 "state": "enabled", 00:16:17.557 "thread": "nvmf_tgt_poll_group_000", 00:16:17.557 "listen_address": { 00:16:17.557 "trtype": "TCP", 00:16:17.557 "adrfam": "IPv4", 00:16:17.557 "traddr": "10.0.0.2", 00:16:17.557 "trsvcid": "4420" 00:16:17.557 }, 00:16:17.557 "peer_address": { 00:16:17.557 "trtype": "TCP", 00:16:17.557 "adrfam": "IPv4", 00:16:17.557 "traddr": "10.0.0.1", 00:16:17.557 "trsvcid": "58968" 00:16:17.557 }, 00:16:17.557 "auth": { 00:16:17.557 "state": "completed", 00:16:17.557 "digest": "sha256", 00:16:17.557 "dhgroup": "null" 00:16:17.557 } 00:16:17.557 } 00:16:17.557 ]' 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.557 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.816 21:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:16:18.381 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.381 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.381 21:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.381 21:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.381 21:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.381 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.381 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.381 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.382 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.640 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.898 00:16:18.898 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.898 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.898 21:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.898 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.898 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.898 21:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:18.898 21:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.898 21:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.898 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.898 { 00:16:18.898 "cntlid": 9, 00:16:18.898 "qid": 0, 00:16:18.898 "state": "enabled", 00:16:18.898 "thread": "nvmf_tgt_poll_group_000", 00:16:18.898 "listen_address": { 00:16:18.898 "trtype": "TCP", 00:16:18.898 "adrfam": "IPv4", 00:16:18.898 "traddr": "10.0.0.2", 00:16:18.898 "trsvcid": "4420" 00:16:18.898 }, 00:16:18.898 "peer_address": { 00:16:18.898 "trtype": "TCP", 00:16:18.898 "adrfam": "IPv4", 00:16:18.898 "traddr": "10.0.0.1", 00:16:18.898 "trsvcid": "58998" 00:16:18.898 }, 00:16:18.898 "auth": { 00:16:18.898 "state": "completed", 00:16:18.898 "digest": "sha256", 00:16:18.898 "dhgroup": "ffdhe2048" 00:16:18.898 } 00:16:18.898 } 00:16:18.898 ]' 00:16:18.898 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.172 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.172 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.172 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.172 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.172 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.172 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.172 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.429 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:16:19.996 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.996 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.996 21:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.996 21:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.996 21:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.996 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.996 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:19.996 21:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.996 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.253 00:16:20.253 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.253 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.253 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.510 { 00:16:20.510 "cntlid": 11, 00:16:20.510 "qid": 0, 00:16:20.510 "state": "enabled", 00:16:20.510 "thread": "nvmf_tgt_poll_group_000", 00:16:20.510 "listen_address": { 00:16:20.510 "trtype": "TCP", 00:16:20.510 "adrfam": "IPv4", 00:16:20.510 "traddr": "10.0.0.2", 00:16:20.510 "trsvcid": "4420" 00:16:20.510 }, 00:16:20.510 "peer_address": { 00:16:20.510 "trtype": "TCP", 00:16:20.510 "adrfam": "IPv4", 00:16:20.510 "traddr": "10.0.0.1", 00:16:20.510 "trsvcid": "59022" 00:16:20.510 }, 00:16:20.510 "auth": { 00:16:20.510 "state": "completed", 00:16:20.510 "digest": "sha256", 00:16:20.510 "dhgroup": "ffdhe2048" 00:16:20.510 } 00:16:20.510 } 00:16:20.510 ]' 00:16:20.510 
00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:20.510 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:20.769 21:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==:
00:16:21.336 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:21.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:21.336 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:21.336 21:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:21.336 21:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.336 21:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:21.336 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:21.336 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:21.336 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:21.594 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:21.851
00:16:21.851 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:21.851 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:21.851 21:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:21.851 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:21.851 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:21.851 21:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:21.851 21:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.851 21:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:21.851 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:21.851 {
00:16:21.851 "cntlid": 13,
00:16:21.851 "qid": 0,
00:16:21.851 "state": "enabled",
00:16:21.851 "thread": "nvmf_tgt_poll_group_000",
00:16:21.851 "listen_address": {
00:16:21.851 "trtype": "TCP",
00:16:21.851 "adrfam": "IPv4",
00:16:21.851 "traddr": "10.0.0.2",
00:16:21.851 "trsvcid": "4420"
00:16:21.851 },
00:16:21.851 "peer_address": {
00:16:21.851 "trtype": "TCP",
00:16:21.851 "adrfam": "IPv4",
00:16:21.851 "traddr": "10.0.0.1",
00:16:21.851 "trsvcid": "59062"
00:16:21.851 },
00:16:21.851 "auth": {
00:16:21.851 "state": "completed",
00:16:21.851 "digest": "sha256",
00:16:21.851 "dhgroup": "ffdhe2048"
00:16:21.851 }
00:16:21.851 }
00:16:21.851 ]'
00:16:21.851 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:22.109 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:22.109 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:22.109 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:22.109 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:22.109 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:22.109 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:22.109 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:22.367 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw:
00:16:22.933 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:22.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:22.933 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:22.933 21:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.933 21:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.933 21:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.933 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:22.933 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:22.933 21:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:22.933 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:23.192
00:16:23.192 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:23.192 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:23.192 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:23.450 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:23.450 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:23.450 21:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:23.450 21:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.450 21:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:23.451 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:23.451 {
00:16:23.451 "cntlid": 15,
00:16:23.451 "qid": 0,
00:16:23.451 "state": "enabled",
00:16:23.451 "thread": "nvmf_tgt_poll_group_000",
00:16:23.451 "listen_address": {
00:16:23.451 "trtype": "TCP",
00:16:23.451 "adrfam": "IPv4",
00:16:23.451 "traddr": "10.0.0.2",
00:16:23.451 "trsvcid": "4420"
00:16:23.451 },
00:16:23.451 "peer_address": {
00:16:23.451 "trtype": "TCP",
00:16:23.451 "adrfam": "IPv4",
00:16:23.451 "traddr": "10.0.0.1",
00:16:23.451 "trsvcid": "35620"
00:16:23.451 },
00:16:23.451 "auth": {
00:16:23.451 "state": "completed",
00:16:23.451 "digest": "sha256",
00:16:23.451 "dhgroup": "ffdhe2048"
00:16:23.451 }
00:16:23.451 }
00:16:23.451 ]'
00:16:23.451 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:23.451 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:23.451 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:23.451 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:23.451 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:23.710 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:23.710 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:23.710 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:23.710 21:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=:
00:16:24.277 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:24.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:24.277 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:24.277 21:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.277 21:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.277 21:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:24.277 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
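At this point every key has been exercised against ffdhe2048 and the outer loop advances to the next DH group. Reconstructed from the xtrace, the driver is two nested loops; the array contents below are a sketch, not the script's literal definitions (only the groups visible in this excerpt are listed, and the keys map is assumed to exist):

    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this run segment
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # hostrpc and connect_authenticate are the script's own helpers, as traced above
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done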
00:16:24.277 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:24.277 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:24.277 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.536 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.807
00:16:24.807 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:24.807 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:24.807 21:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:25.066 {
00:16:25.066 "cntlid": 17,
00:16:25.066 "qid": 0,
00:16:25.066 "state": "enabled",
00:16:25.066 "thread": "nvmf_tgt_poll_group_000",
00:16:25.066 "listen_address": {
00:16:25.066 "trtype": "TCP",
00:16:25.066 "adrfam": "IPv4",
00:16:25.066 "traddr": "10.0.0.2",
00:16:25.066 "trsvcid": "4420"
00:16:25.066 },
00:16:25.066 "peer_address": {
00:16:25.066 "trtype": "TCP",
00:16:25.066 "adrfam": "IPv4",
00:16:25.066 "traddr": "10.0.0.1",
00:16:25.066 "trsvcid": "35650"
00:16:25.066 },
00:16:25.066 "auth": {
00:16:25.066 "state": "completed",
00:16:25.066 "digest": "sha256",
00:16:25.066 "dhgroup": "ffdhe3072"
00:16:25.066 }
00:16:25.066 }
00:16:25.066 ]'
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:25.066 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:25.325 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=:
00:16:25.893 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:25.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:25.893 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:25.893 21:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.893 21:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.893 21:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.893 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:25.893 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:25.893 21:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:25.893 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.151
00:16:26.151 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:26.151 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:26.151 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:26.409 {
00:16:26.409 "cntlid": 19,
00:16:26.409 "qid": 0,
00:16:26.409 "state": "enabled",
00:16:26.409 "thread": "nvmf_tgt_poll_group_000",
00:16:26.409 "listen_address": {
00:16:26.409 "trtype": "TCP",
00:16:26.409 "adrfam": "IPv4",
00:16:26.409 "traddr": "10.0.0.2",
00:16:26.409 "trsvcid": "4420"
00:16:26.409 },
00:16:26.409 "peer_address": {
00:16:26.409 "trtype": "TCP",
00:16:26.409 "adrfam": "IPv4",
00:16:26.409 "traddr": "10.0.0.1",
00:16:26.409 "trsvcid": "35686"
00:16:26.409 },
00:16:26.409 "auth": {
00:16:26.409 "state": "completed",
00:16:26.409 "digest": "sha256",
00:16:26.409 "dhgroup": "ffdhe3072"
00:16:26.409 }
00:16:26.409 }
00:16:26.409 ]'
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:26.409 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:26.668 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:26.668 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:26.668 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:26.668 21:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==:
00:16:27.235 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:27.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:27.235 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:27.235 21:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:27.235 21:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.235 21:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:27.235 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:27.235 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:27.235 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.496 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.800
00:16:27.800 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:27.800 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:27.800 21:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:28.058 {
00:16:28.058 "cntlid": 21,
00:16:28.058 "qid": 0,
00:16:28.058 "state": "enabled",
00:16:28.058 "thread": "nvmf_tgt_poll_group_000",
00:16:28.058 "listen_address": {
00:16:28.058 "trtype": "TCP",
00:16:28.058 "adrfam": "IPv4",
00:16:28.058 "traddr": "10.0.0.2",
00:16:28.058 "trsvcid": "4420"
00:16:28.058 },
00:16:28.058 "peer_address": {
00:16:28.058 "trtype": "TCP",
00:16:28.058 "adrfam": "IPv4",
00:16:28.058 "traddr": "10.0.0.1",
00:16:28.058 "trsvcid": "35714"
00:16:28.058 },
00:16:28.058 "auth": {
00:16:28.058 "state": "completed",
00:16:28.058 "digest": "sha256",
00:16:28.058 "dhgroup": "ffdhe3072"
00:16:28.058 }
00:16:28.058 }
00:16:28.058 ]'
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:28.058 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:28.317 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw:
00:16:28.884 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
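Each pass also proves the same credentials work with the kernel initiator via nvme-cli, handing it the secrets in their DHHC-1 wire format. Pulled out of the trace, the host-side command pair looks like the sketch below; the secret values here are placeholders, the real blobs appear verbatim in the log above:

    # connect with bidirectional DH-HMAC-CHAP, then tear the session down
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:02:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0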
00:16:28.884 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:28.884 21:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.884 21:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.884 21:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.884 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:28.884 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:28.884 21:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:28.884 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:16:28.884 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:28.884 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:29.143 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:29.143 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:29.143 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:29.143 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:29.143 21:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:29.143 21:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.143 21:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:29.143 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:29.143 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:29.401
00:16:29.401 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:29.401 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:29.401 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:29.401 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:29.401 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:29.401 21:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:29.401 21:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.401 21:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:29.401 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:29.401 {
00:16:29.401 "cntlid": 23,
00:16:29.401 "qid": 0,
00:16:29.401 "state": "enabled",
00:16:29.401 "thread": "nvmf_tgt_poll_group_000",
00:16:29.401 "listen_address": {
00:16:29.401 "trtype": "TCP",
00:16:29.401 "adrfam": "IPv4",
00:16:29.401 "traddr": "10.0.0.2",
00:16:29.401 "trsvcid": "4420"
00:16:29.401 },
00:16:29.401 "peer_address": {
00:16:29.401 "trtype": "TCP",
00:16:29.401 "adrfam": "IPv4",
00:16:29.401 "traddr": "10.0.0.1",
00:16:29.401 "trsvcid": "35740"
00:16:29.401 },
00:16:29.401 "auth": {
00:16:29.401 "state": "completed",
00:16:29.401 "digest": "sha256",
00:16:29.401 "dhgroup": "ffdhe3072"
00:16:29.401 }
00:16:29.401 }
00:16:29.401 ]'
00:16:29.659 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:29.659 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:29.659 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:29.659 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:29.659 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:29.917 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.917 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.917 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:29.917 21:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=:
00:16:30.483 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:30.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:30.483 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:30.483 21:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:30.483 21:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.483 21:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:30.483 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:30.483 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:30.483 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:30.483 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.741 21:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:31.000
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:31.000 {
00:16:31.000 "cntlid": 25,
00:16:31.000 "qid": 0,
00:16:31.000 "state": "enabled",
00:16:31.000 "thread": "nvmf_tgt_poll_group_000",
00:16:31.000 "listen_address": {
00:16:31.000 "trtype": "TCP",
00:16:31.000 "adrfam": "IPv4",
00:16:31.000 "traddr": "10.0.0.2",
00:16:31.000 "trsvcid": "4420"
00:16:31.000 },
00:16:31.000 "peer_address": {
00:16:31.000 "trtype": "TCP",
00:16:31.000 "adrfam": "IPv4",
00:16:31.000 "traddr": "10.0.0.1",
00:16:31.000 "trsvcid": "35778"
00:16:31.000 },
00:16:31.000 "auth": {
00:16:31.000 "state": "completed",
00:16:31.000 "digest": "sha256",
00:16:31.000 "dhgroup": "ffdhe4096"
00:16:31.000 }
00:16:31.000 }
00:16:31.000 ]'
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:31.000 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:31.258 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:31.258 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:31.258 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:31.258 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:31.258 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=:
00:16:31.824 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:31.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:31.824 21:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:31.824 21:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:31.824 21:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.824 21:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
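Between the nvme-cli passes, the SPDK host application (driven over /var/tmp/host.sock) performs the same handshake as discrete RPCs. Condensed from the trace, one host-side round trip is sketched below; key1 and ckey1 are key names the harness registered with the host application earlier in the run, and the $rpc variable is introduced here only for brevity:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # restrict the host to one digest/DH group, then attach with DH-HMAC-CHAP keys
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0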
00:16:31.824 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:31.824 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:31.824 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.082 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.340
00:16:32.340 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:32.340 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:32.340 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:32.598 {
00:16:32.598 "cntlid": 27,
00:16:32.598 "qid": 0,
00:16:32.598 "state": "enabled",
00:16:32.598 "thread": "nvmf_tgt_poll_group_000",
00:16:32.598 "listen_address": {
00:16:32.598 "trtype": "TCP",
00:16:32.598 "adrfam": "IPv4",
00:16:32.598 "traddr": "10.0.0.2",
00:16:32.598 "trsvcid": "4420"
00:16:32.598 },
00:16:32.598 "peer_address": {
00:16:32.598 "trtype": "TCP",
00:16:32.598 "adrfam": "IPv4",
00:16:32.598 "traddr": "10.0.0.1",
00:16:32.598 "trsvcid": "35818"
00:16:32.598 },
00:16:32.598 "auth": {
00:16:32.598 "state": "completed",
00:16:32.598 "digest": "sha256",
00:16:32.598 "dhgroup": "ffdhe4096"
00:16:32.598 }
00:16:32.598 }
00:16:32.598 ]'
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.598 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:32.856 21:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==:
00:16:33.422 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.422 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:33.422 21:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.422 21:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.422 21:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:33.422 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:33.422 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:33.422 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.680 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.938
00:16:33.938 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:33.938 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:33.938 21:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:33.938 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:33.938 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:33.938 21:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.938 21:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.938 21:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:33.938 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:33.938 {
00:16:33.938 "cntlid": 29,
00:16:33.938 "qid": 0,
00:16:33.938 "state": "enabled",
00:16:33.938 "thread": "nvmf_tgt_poll_group_000",
00:16:33.938 "listen_address": {
00:16:33.938 "trtype": "TCP",
00:16:33.938 "adrfam": "IPv4",
00:16:33.938 "traddr": "10.0.0.2",
00:16:33.938 "trsvcid": "4420"
00:16:33.938 },
00:16:33.938 "peer_address": {
00:16:33.938 "trtype": "TCP",
00:16:33.938 "adrfam": "IPv4",
00:16:33.938 "traddr": "10.0.0.1",
00:16:33.938 "trsvcid": "48044"
00:16:33.938 },
00:16:33.938 "auth": {
00:16:33.938 "state": "completed",
00:16:33.938 "digest": "sha256",
00:16:33.938 "dhgroup": "ffdhe4096"
00:16:33.938 }
00:16:33.938 }
00:16:33.938 ]'
00:16:34.197 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:34.197 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:34.197 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:34.197 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:34.197 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:34.197 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.197 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.197 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.455 21:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw:
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:16:35.022 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:35.280 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:35.538
00:16:35.538 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:35.538 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:35.538 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:35.538 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:35.538 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:35.538 21:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.538 21:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.538 21:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.538 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:35.538 {
00:16:35.538 "cntlid": 31,
00:16:35.538 "qid": 0,
00:16:35.538 "state": "enabled",
00:16:35.538 "thread": "nvmf_tgt_poll_group_000",
00:16:35.538 "listen_address": {
00:16:35.538 "trtype": "TCP",
00:16:35.538 "adrfam": "IPv4",
00:16:35.538 "traddr": "10.0.0.2",
00:16:35.538 "trsvcid": "4420"
00:16:35.538 },
00:16:35.538 "peer_address": {
00:16:35.538 "trtype": "TCP",
00:16:35.538 "adrfam": "IPv4",
00:16:35.538 "traddr": "10.0.0.1",
00:16:35.538 "trsvcid": "48066"
00:16:35.538 },
00:16:35.538 "auth": {
00:16:35.538 "state": "completed",
00:16:35.538 "digest": "sha256",
00:16:35.538 "dhgroup": "ffdhe4096"
00:16:35.538 }
00:16:35.538 }
00:16:35.538 ]'
00:16:35.539 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:35.797 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:35.797 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:35.797 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:35.797 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:35.797 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:35.797 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:35.797 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:35.797 21:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=:
00:16:36.364 21:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:36.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:36.364 21:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:36.364 21:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:36.364 21:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.364 21:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
"ckey$3"}) 00:16:36.622 21:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.622 21:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.622 21:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.622 21:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.622 21:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.622 21:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.882 00:16:36.882 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.882 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.882 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.141 { 00:16:37.141 "cntlid": 33, 00:16:37.141 "qid": 0, 00:16:37.141 "state": "enabled", 00:16:37.141 "thread": "nvmf_tgt_poll_group_000", 00:16:37.141 "listen_address": { 00:16:37.141 "trtype": "TCP", 00:16:37.141 "adrfam": "IPv4", 00:16:37.141 "traddr": "10.0.0.2", 00:16:37.141 "trsvcid": "4420" 00:16:37.141 }, 00:16:37.141 "peer_address": { 00:16:37.141 "trtype": "TCP", 00:16:37.141 "adrfam": "IPv4", 00:16:37.141 "traddr": "10.0.0.1", 00:16:37.141 "trsvcid": "48084" 00:16:37.141 }, 00:16:37.141 "auth": { 00:16:37.141 "state": "completed", 00:16:37.141 "digest": "sha256", 00:16:37.141 "dhgroup": "ffdhe6144" 00:16:37.141 } 00:16:37.141 } 00:16:37.141 ]' 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.141 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.400 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.400 21:54:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.400 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.400 21:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:16:37.967 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.967 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.967 21:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.967 21:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.967 21:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.967 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.967 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.967 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.226 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.485 00:16:38.485 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.485 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.485 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.744 { 00:16:38.744 "cntlid": 35, 00:16:38.744 "qid": 0, 00:16:38.744 "state": "enabled", 00:16:38.744 "thread": "nvmf_tgt_poll_group_000", 00:16:38.744 "listen_address": { 00:16:38.744 "trtype": "TCP", 00:16:38.744 "adrfam": "IPv4", 00:16:38.744 "traddr": "10.0.0.2", 00:16:38.744 "trsvcid": "4420" 00:16:38.744 }, 00:16:38.744 "peer_address": { 00:16:38.744 "trtype": "TCP", 00:16:38.744 "adrfam": "IPv4", 00:16:38.744 "traddr": "10.0.0.1", 00:16:38.744 "trsvcid": "48110" 00:16:38.744 }, 00:16:38.744 "auth": { 00:16:38.744 "state": "completed", 00:16:38.744 "digest": "sha256", 00:16:38.744 "dhgroup": "ffdhe6144" 00:16:38.744 } 00:16:38.744 } 00:16:38.744 ]' 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.744 21:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.002 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:16:39.570 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.570 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
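The block of checks just above is the verification half of connect_authenticate, repeated after every bdev_nvme_attach_controller: confirm the host-side app sees controller nvme0, fetch the subsystem's qpairs from the target, and assert that the negotiated digest, DH group, and authentication state match what was configured. A minimal sketch of that check, assuming the socket paths from this run, that the target app listens on rpc.py's default socket, and that $digest/$dhgroup hold the expected values:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # host side: the attached controller must show up as nvme0
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # target side: inspect the authenticated qpair (default RPC socket assumed)
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
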
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.570 21:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.570 21:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.570 21:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.570 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.570 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.570 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.829 21:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.087 00:16:40.087 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.087 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.087 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.346 { 00:16:40.346 "cntlid": 37, 00:16:40.346 "qid": 0, 00:16:40.346 "state": "enabled", 00:16:40.346 "thread": "nvmf_tgt_poll_group_000", 00:16:40.346 "listen_address": { 00:16:40.346 "trtype": "TCP", 00:16:40.346 "adrfam": "IPv4", 00:16:40.346 "traddr": "10.0.0.2", 00:16:40.346 "trsvcid": "4420" 00:16:40.346 }, 00:16:40.346 "peer_address": { 00:16:40.346 "trtype": "TCP", 00:16:40.346 "adrfam": "IPv4", 00:16:40.346 "traddr": "10.0.0.1", 00:16:40.346 "trsvcid": "48134" 00:16:40.346 }, 00:16:40.346 "auth": { 00:16:40.346 "state": "completed", 00:16:40.346 "digest": "sha256", 00:16:40.346 "dhgroup": "ffdhe6144" 00:16:40.346 } 00:16:40.346 } 00:16:40.346 ]' 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.346 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.605 21:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:16:41.190 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.190 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.190 21:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.190 21:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.190 21:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.190 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.190 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.190 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.448 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.705 00:16:41.705 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.705 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.705 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.999 21:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.999 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.999 21:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.999 21:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.999 21:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.999 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.999 { 00:16:41.999 "cntlid": 39, 00:16:41.999 "qid": 0, 00:16:41.999 "state": "enabled", 00:16:41.999 "thread": "nvmf_tgt_poll_group_000", 00:16:41.999 "listen_address": { 00:16:41.999 "trtype": "TCP", 00:16:42.000 "adrfam": "IPv4", 00:16:42.000 "traddr": "10.0.0.2", 00:16:42.000 "trsvcid": "4420" 00:16:42.000 }, 00:16:42.000 "peer_address": { 00:16:42.000 "trtype": "TCP", 00:16:42.000 "adrfam": "IPv4", 00:16:42.000 "traddr": "10.0.0.1", 00:16:42.000 "trsvcid": "48160" 00:16:42.000 }, 00:16:42.000 "auth": { 00:16:42.000 "state": "completed", 00:16:42.000 "digest": "sha256", 00:16:42.000 "dhgroup": "ffdhe6144" 00:16:42.000 } 00:16:42.000 } 00:16:42.000 ]' 00:16:42.000 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.000 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.000 21:54:36 
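The secrets handed to nvme connect throughout this log are DH-HMAC-CHAP keys in the DHHC-1:NN:<base64>: format, where the base64 payload is the secret with a CRC-32 appended and NN records how the secret is stored (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512; key0 through key3 in this run carry exactly those prefixes). Supplying --dhchap-ctrl-secret next to --dhchap-secret requests bidirectional authentication, so the host verifies the controller as well. A sketch of such a connect with placeholder keys, where $hostnqn and $hostid stand for the uuid NQN and host id used above:

    # placeholder keys only -- real DHHC-1 strings carry base64 key material plus CRC-32
    host_key='DHHC-1:01:<host secret, base64>:'
    ctrl_key='DHHC-1:03:<controller secret, base64>:'
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
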
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.000 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.000 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.000 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.000 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.000 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.258 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:16:42.825 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.825 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.825 21:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.825 21:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.825 21:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.825 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.825 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.825 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.825 21:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.082 21:54:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.082 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.340 00:16:43.340 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.340 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.340 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.599 { 00:16:43.599 "cntlid": 41, 00:16:43.599 "qid": 0, 00:16:43.599 "state": "enabled", 00:16:43.599 "thread": "nvmf_tgt_poll_group_000", 00:16:43.599 "listen_address": { 00:16:43.599 "trtype": "TCP", 00:16:43.599 "adrfam": "IPv4", 00:16:43.599 "traddr": "10.0.0.2", 00:16:43.599 "trsvcid": "4420" 00:16:43.599 }, 00:16:43.599 "peer_address": { 00:16:43.599 "trtype": "TCP", 00:16:43.599 "adrfam": "IPv4", 00:16:43.599 "traddr": "10.0.0.1", 00:16:43.599 "trsvcid": "48438" 00:16:43.599 }, 00:16:43.599 "auth": { 00:16:43.599 "state": "completed", 00:16:43.599 "digest": "sha256", 00:16:43.599 "dhgroup": "ffdhe8192" 00:16:43.599 } 00:16:43.599 } 00:16:43.599 ]' 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.599 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.857 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.857 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.857 21:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.857 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:16:44.422 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.422 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.422 21:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.422 21:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.422 21:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.422 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.422 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.422 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.681 21:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.247 00:16:45.247 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.247 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.247 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.247 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.247 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.247 21:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.247 21:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.506 { 00:16:45.506 "cntlid": 43, 00:16:45.506 "qid": 0, 00:16:45.506 "state": "enabled", 00:16:45.506 "thread": "nvmf_tgt_poll_group_000", 00:16:45.506 "listen_address": { 00:16:45.506 "trtype": "TCP", 00:16:45.506 "adrfam": "IPv4", 00:16:45.506 "traddr": "10.0.0.2", 00:16:45.506 "trsvcid": "4420" 00:16:45.506 }, 00:16:45.506 "peer_address": { 00:16:45.506 "trtype": "TCP", 00:16:45.506 "adrfam": "IPv4", 00:16:45.506 "traddr": "10.0.0.1", 00:16:45.506 "trsvcid": "48460" 00:16:45.506 }, 00:16:45.506 "auth": { 00:16:45.506 "state": "completed", 00:16:45.506 "digest": "sha256", 00:16:45.506 "dhgroup": "ffdhe8192" 00:16:45.506 } 00:16:45.506 } 00:16:45.506 ]' 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.506 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.791 21:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.359 21:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.927 00:16:46.927 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.927 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.927 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.186 { 00:16:47.186 "cntlid": 45, 00:16:47.186 "qid": 0, 00:16:47.186 "state": "enabled", 00:16:47.186 "thread": "nvmf_tgt_poll_group_000", 00:16:47.186 "listen_address": { 00:16:47.186 "trtype": "TCP", 00:16:47.186 "adrfam": "IPv4", 00:16:47.186 "traddr": "10.0.0.2", 00:16:47.186 "trsvcid": "4420" 
00:16:47.186 }, 00:16:47.186 "peer_address": { 00:16:47.186 "trtype": "TCP", 00:16:47.186 "adrfam": "IPv4", 00:16:47.186 "traddr": "10.0.0.1", 00:16:47.186 "trsvcid": "48496" 00:16:47.186 }, 00:16:47.186 "auth": { 00:16:47.186 "state": "completed", 00:16:47.186 "digest": "sha256", 00:16:47.186 "dhgroup": "ffdhe8192" 00:16:47.186 } 00:16:47.186 } 00:16:47.186 ]' 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.186 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.445 21:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:16:48.013 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.013 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.013 21:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.013 21:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.013 21:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.013 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.013 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.013 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.271 21:54:42 
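Two RPC paths interleave in this trace: rpc_cmd drives the nvmf target app, while every hostrpc line expands to rpc.py -s /var/tmp/host.sock, i.e. a second SPDK instance whose bdev_nvme layer acts as the authenticating host. Judging by the target/auth.sh@31 expansion printed on each hostrpc call, the wrapper is presumably no more than the following, with $rootdir standing for the spdk checkout:

    hostrpc() {
        # forward an RPC to the host-side SPDK app listening on /var/tmp/host.sock
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
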
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.271 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.529 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.788 { 00:16:48.788 "cntlid": 47, 00:16:48.788 "qid": 0, 00:16:48.788 "state": "enabled", 00:16:48.788 "thread": "nvmf_tgt_poll_group_000", 00:16:48.788 "listen_address": { 00:16:48.788 "trtype": "TCP", 00:16:48.788 "adrfam": "IPv4", 00:16:48.788 "traddr": "10.0.0.2", 00:16:48.788 "trsvcid": "4420" 00:16:48.788 }, 00:16:48.788 "peer_address": { 00:16:48.788 "trtype": "TCP", 00:16:48.788 "adrfam": "IPv4", 00:16:48.788 "traddr": "10.0.0.1", 00:16:48.788 "trsvcid": "48524" 00:16:48.788 }, 00:16:48.788 "auth": { 00:16:48.788 "state": "completed", 00:16:48.788 "digest": "sha256", 00:16:48.788 "dhgroup": "ffdhe8192" 00:16:48.788 } 00:16:48.788 } 00:16:48.788 ]' 00:16:48.788 21:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.788 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.788 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.048 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.048 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.048 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.048 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.048 
21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.048 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.616 21:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.874 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.132 00:16:50.132 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.132 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.132 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.396 { 00:16:50.396 "cntlid": 49, 00:16:50.396 "qid": 0, 00:16:50.396 "state": "enabled", 00:16:50.396 "thread": "nvmf_tgt_poll_group_000", 00:16:50.396 "listen_address": { 00:16:50.396 "trtype": "TCP", 00:16:50.396 "adrfam": "IPv4", 00:16:50.396 "traddr": "10.0.0.2", 00:16:50.396 "trsvcid": "4420" 00:16:50.396 }, 00:16:50.396 "peer_address": { 00:16:50.396 "trtype": "TCP", 00:16:50.396 "adrfam": "IPv4", 00:16:50.396 "traddr": "10.0.0.1", 00:16:50.396 "trsvcid": "48544" 00:16:50.396 }, 00:16:50.396 "auth": { 00:16:50.396 "state": "completed", 00:16:50.396 "digest": "sha384", 00:16:50.396 "dhgroup": "null" 00:16:50.396 } 00:16:50.396 } 00:16:50.396 ]' 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.396 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.656 21:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:16:51.225 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.225 21:54:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.225 21:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.225 21:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.225 21:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.225 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.225 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.225 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.484 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.484 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- 
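The target/auth.sh@91-@94 markers above lay out the test matrix being walked: an outer loop over digests (the sha256 pass has just ended and sha384 is under way), a middle loop over DH groups, and an inner loop over the four key ids, with the host's allowed algorithms narrowed to exactly the pair under test before each connect. In outline, with the array contents restricted to the values visible in this excerpt and keys/connect_authenticate taken from the script itself:

    digests=(sha256 sha384)                       # later digests, if any, fall outside this excerpt
    dhgroups=(null ffdhe4096 ffdhe6144 ffdhe8192) # groups seen in this part of the log

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do        # key0 .. key3
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
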
common/autotest_common.sh@10 -- # set +x 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.743 { 00:16:51.743 "cntlid": 51, 00:16:51.743 "qid": 0, 00:16:51.743 "state": "enabled", 00:16:51.743 "thread": "nvmf_tgt_poll_group_000", 00:16:51.743 "listen_address": { 00:16:51.743 "trtype": "TCP", 00:16:51.743 "adrfam": "IPv4", 00:16:51.743 "traddr": "10.0.0.2", 00:16:51.743 "trsvcid": "4420" 00:16:51.743 }, 00:16:51.743 "peer_address": { 00:16:51.743 "trtype": "TCP", 00:16:51.743 "adrfam": "IPv4", 00:16:51.743 "traddr": "10.0.0.1", 00:16:51.743 "trsvcid": "48574" 00:16:51.743 }, 00:16:51.743 "auth": { 00:16:51.743 "state": "completed", 00:16:51.743 "digest": "sha384", 00:16:51.743 "dhgroup": "null" 00:16:51.743 } 00:16:51.743 } 00:16:51.743 ]' 00:16:51.743 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.003 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.003 21:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.003 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:52.003 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.003 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.003 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.003 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.262 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:16:52.831 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.831 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.831 21:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.831 21:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.831 21:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.831 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.831 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.831 21:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:52.831 21:54:47 
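This stretch of the matrix runs sha384 with DH group "null", i.e. DH-HMAC-CHAP without an ephemeral FFDHE exchange: the challenge is answered from the shared secret alone, so the handshake still authenticates but contributes no additional key material; the qpair JSON below accordingly reports "dhgroup": "null". Each iteration then ends with the teardown repeated throughout the log, leaving a clean slate for the next key; roughly:

    # end-of-iteration cleanup, as seen after every successful handshake
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"  # $hostnqn: the uuid NQN above
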
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.831 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.091 00:16:53.091 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.091 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.091 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.350 { 00:16:53.350 "cntlid": 53, 00:16:53.350 "qid": 0, 00:16:53.350 "state": "enabled", 00:16:53.350 "thread": "nvmf_tgt_poll_group_000", 00:16:53.350 "listen_address": { 00:16:53.350 "trtype": "TCP", 00:16:53.350 "adrfam": "IPv4", 00:16:53.350 "traddr": "10.0.0.2", 00:16:53.350 "trsvcid": "4420" 00:16:53.350 }, 00:16:53.350 "peer_address": { 00:16:53.350 "trtype": "TCP", 00:16:53.350 "adrfam": "IPv4", 00:16:53.350 "traddr": "10.0.0.1", 00:16:53.350 "trsvcid": "55128" 00:16:53.350 }, 00:16:53.350 "auth": { 00:16:53.350 "state": "completed", 00:16:53.350 "digest": "sha384", 00:16:53.350 "dhgroup": "null" 00:16:53.350 } 00:16:53.350 } 00:16:53.350 ]' 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.350 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.609 21:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:16:54.177 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.177 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.177 21:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.177 21:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.436 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.694 00:16:54.694 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.694 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.694 21:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.953 { 00:16:54.953 "cntlid": 55, 00:16:54.953 "qid": 0, 00:16:54.953 "state": "enabled", 00:16:54.953 "thread": "nvmf_tgt_poll_group_000", 00:16:54.953 "listen_address": { 00:16:54.953 "trtype": "TCP", 00:16:54.953 "adrfam": "IPv4", 00:16:54.953 "traddr": "10.0.0.2", 00:16:54.953 "trsvcid": "4420" 00:16:54.953 }, 00:16:54.953 "peer_address": { 00:16:54.953 "trtype": "TCP", 00:16:54.953 "adrfam": "IPv4", 00:16:54.953 "traddr": "10.0.0.1", 00:16:54.953 "trsvcid": "55152" 00:16:54.953 }, 00:16:54.953 "auth": { 00:16:54.953 "state": "completed", 00:16:54.953 "digest": "sha384", 00:16:54.953 "dhgroup": "null" 00:16:54.953 } 00:16:54.953 } 00:16:54.953 ]' 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.953 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.212 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:16:55.780 21:54:49 
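Note the asymmetry in the key3 rounds: the script's ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion shows that no controller key exists for keyid 3, so both the RPC leg and the nvme-cli leg (the DHHC-1:03 secret above, with no --dhchap-ctrl-secret) fall back to unidirectional authentication: the target verifies the host, but the host does not verify the target. Side by side, with NQNs taken from this run:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
# bidirectional (keyids 0-2): each side proves possession of a key
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# unidirectional (keyid 3): host-only authentication, no controller key supplied
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key3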
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.780 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.780 21:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.780 21:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.780 21:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.780 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.780 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.780 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.780 21:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.058 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.364 00:16:56.364 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.364 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.364 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.364 21:54:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.364 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.364 21:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.364 21:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.364 21:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.364 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.364 { 00:16:56.364 "cntlid": 57, 00:16:56.364 "qid": 0, 00:16:56.364 "state": "enabled", 00:16:56.364 "thread": "nvmf_tgt_poll_group_000", 00:16:56.364 "listen_address": { 00:16:56.364 "trtype": "TCP", 00:16:56.364 "adrfam": "IPv4", 00:16:56.364 "traddr": "10.0.0.2", 00:16:56.364 "trsvcid": "4420" 00:16:56.364 }, 00:16:56.365 "peer_address": { 00:16:56.365 "trtype": "TCP", 00:16:56.365 "adrfam": "IPv4", 00:16:56.365 "traddr": "10.0.0.1", 00:16:56.365 "trsvcid": "55188" 00:16:56.365 }, 00:16:56.365 "auth": { 00:16:56.365 "state": "completed", 00:16:56.365 "digest": "sha384", 00:16:56.365 "dhgroup": "ffdhe2048" 00:16:56.365 } 00:16:56.365 } 00:16:56.365 ]' 00:16:56.365 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.365 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.365 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.365 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.365 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.623 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.623 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.623 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.623 21:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:16:57.191 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.191 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.191 21:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.191 21:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.191 21:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.191 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.191 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.191 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.450 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.709 00:16:57.709 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.709 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.709 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.968 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.968 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.968 21:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.968 21:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.968 21:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.968 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.968 { 00:16:57.968 "cntlid": 59, 00:16:57.968 "qid": 0, 00:16:57.968 "state": "enabled", 00:16:57.968 "thread": "nvmf_tgt_poll_group_000", 00:16:57.968 "listen_address": { 00:16:57.968 "trtype": "TCP", 00:16:57.968 "adrfam": "IPv4", 00:16:57.968 "traddr": "10.0.0.2", 00:16:57.968 "trsvcid": "4420" 00:16:57.968 }, 00:16:57.968 "peer_address": { 00:16:57.968 "trtype": "TCP", 00:16:57.968 "adrfam": "IPv4", 00:16:57.968 
"traddr": "10.0.0.1", 00:16:57.968 "trsvcid": "55206" 00:16:57.968 }, 00:16:57.968 "auth": { 00:16:57.968 "state": "completed", 00:16:57.968 "digest": "sha384", 00:16:57.968 "dhgroup": "ffdhe2048" 00:16:57.968 } 00:16:57.968 } 00:16:57.968 ]' 00:16:57.968 21:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.968 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.968 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.968 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.968 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.968 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.968 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.968 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.226 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:16:58.791 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.791 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.791 21:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.791 21:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.791 21:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.791 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.791 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.791 21:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.791 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:58.791 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.791 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:58.791 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:58.791 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:58.791 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.791 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.791 21:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.791 21:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.049 21:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.049 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.049 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.049 00:16:59.049 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.049 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.049 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.306 { 00:16:59.306 "cntlid": 61, 00:16:59.306 "qid": 0, 00:16:59.306 "state": "enabled", 00:16:59.306 "thread": "nvmf_tgt_poll_group_000", 00:16:59.306 "listen_address": { 00:16:59.306 "trtype": "TCP", 00:16:59.306 "adrfam": "IPv4", 00:16:59.306 "traddr": "10.0.0.2", 00:16:59.306 "trsvcid": "4420" 00:16:59.306 }, 00:16:59.306 "peer_address": { 00:16:59.306 "trtype": "TCP", 00:16:59.306 "adrfam": "IPv4", 00:16:59.306 "traddr": "10.0.0.1", 00:16:59.306 "trsvcid": "55230" 00:16:59.306 }, 00:16:59.306 "auth": { 00:16:59.306 "state": "completed", 00:16:59.306 "digest": "sha384", 00:16:59.306 "dhgroup": "ffdhe2048" 00:16:59.306 } 00:16:59.306 } 00:16:59.306 ]' 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.306 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.564 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.564 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.564 21:54:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.564 21:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:00.271 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.271 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.271 21:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.271 21:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.271 21:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.271 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.271 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.271 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.529 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.529 00:17:00.788 21:54:54 
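Two RPC sockets are in play throughout this trace: rpc_cmd drives the NVMe-oF target over its default socket, while every hostrpc call expands to rpc.py -s /var/tmp/host.sock, a second SPDK application acting as the initiator (the bdev_nvme side). A plausible reconstruction of the helper behind the target/auth.sh@31 expansions; the actual definition in the script may differ:

# hypothetical helper; rootdir would point at the spdk checkout used above
hostrpc() {
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}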
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.788 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.788 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.788 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.788 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.788 21:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.788 21:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.788 21:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.788 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.788 { 00:17:00.788 "cntlid": 63, 00:17:00.788 "qid": 0, 00:17:00.788 "state": "enabled", 00:17:00.788 "thread": "nvmf_tgt_poll_group_000", 00:17:00.788 "listen_address": { 00:17:00.788 "trtype": "TCP", 00:17:00.788 "adrfam": "IPv4", 00:17:00.788 "traddr": "10.0.0.2", 00:17:00.788 "trsvcid": "4420" 00:17:00.788 }, 00:17:00.788 "peer_address": { 00:17:00.788 "trtype": "TCP", 00:17:00.788 "adrfam": "IPv4", 00:17:00.788 "traddr": "10.0.0.1", 00:17:00.788 "trsvcid": "55258" 00:17:00.788 }, 00:17:00.788 "auth": { 00:17:00.788 "state": "completed", 00:17:00.788 "digest": "sha384", 00:17:00.788 "dhgroup": "ffdhe2048" 00:17:00.788 } 00:17:00.788 } 00:17:00.788 ]' 00:17:00.788 21:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.788 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.788 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.046 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.046 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.046 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.046 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.046 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.046 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:01.612 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.612 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.612 21:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.612 21:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
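The @92/@93 loop markers give away the shape of the sweep: for the digest under test (sha384 in this stretch of the log) the script tries every DH group with every key index, which is why the null rounds are followed by ffdhe2048, then ffdhe3072 immediately below, with ffdhe4096 following. Schematically, with the array contents assumed from the combinations that actually appear:

dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096")   # later groups may follow
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                      # keyids 0..3 in this run
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done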
00:17:01.612 21:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.612 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.612 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.612 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.612 21:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.871 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.130 00:17:02.130 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.130 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.130 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.389 { 
00:17:02.389 "cntlid": 65, 00:17:02.389 "qid": 0, 00:17:02.389 "state": "enabled", 00:17:02.389 "thread": "nvmf_tgt_poll_group_000", 00:17:02.389 "listen_address": { 00:17:02.389 "trtype": "TCP", 00:17:02.389 "adrfam": "IPv4", 00:17:02.389 "traddr": "10.0.0.2", 00:17:02.389 "trsvcid": "4420" 00:17:02.389 }, 00:17:02.389 "peer_address": { 00:17:02.389 "trtype": "TCP", 00:17:02.389 "adrfam": "IPv4", 00:17:02.389 "traddr": "10.0.0.1", 00:17:02.389 "trsvcid": "55292" 00:17:02.389 }, 00:17:02.389 "auth": { 00:17:02.389 "state": "completed", 00:17:02.389 "digest": "sha384", 00:17:02.389 "dhgroup": "ffdhe3072" 00:17:02.389 } 00:17:02.389 } 00:17:02.389 ]' 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.389 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.666 21:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:03.234 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.234 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.234 21:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.234 21:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.234 21:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.234 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.234 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.235 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.494 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.752 00:17:03.752 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.752 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.752 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.752 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.752 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.752 21:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.752 21:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.752 21:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.752 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.752 { 00:17:03.752 "cntlid": 67, 00:17:03.752 "qid": 0, 00:17:03.752 "state": "enabled", 00:17:03.752 "thread": "nvmf_tgt_poll_group_000", 00:17:03.752 "listen_address": { 00:17:03.752 "trtype": "TCP", 00:17:03.752 "adrfam": "IPv4", 00:17:03.752 "traddr": "10.0.0.2", 00:17:03.752 "trsvcid": "4420" 00:17:03.752 }, 00:17:03.752 "peer_address": { 00:17:03.752 "trtype": "TCP", 00:17:03.752 "adrfam": "IPv4", 00:17:03.752 "traddr": "10.0.0.1", 00:17:03.753 "trsvcid": "37558" 00:17:03.753 }, 00:17:03.753 "auth": { 00:17:03.753 "state": "completed", 00:17:03.753 "digest": "sha384", 00:17:03.753 "dhgroup": "ffdhe3072" 00:17:03.753 } 00:17:03.753 } 00:17:03.753 ]' 00:17:03.753 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.753 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.753 21:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.011 21:54:58 
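After each RPC-driven attach/detach, the round is repeated with the kernel initiator via nvme connect. The opaque --dhchap-secret strings are standard NVMe DH-HMAC-CHAP keys of the form DHHC-1:<hash>:<base64 of secret plus CRC-32>:, where the hash field names the key's transformation function (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the host and controller secrets within one round can carry different prefixes. A hand-run equivalent; the generated key is a fresh placeholder rather than a secret from this run, and flag spellings follow recent nvme-cli documentation (older builds may differ):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
hostid=80aaeb9f-0274-ea11-906e-0017a4403562
# generate a host key; --hmac=2 selects the SHA-384 transformation (prefix DHHC-1:02:)
hostkey=$(nvme gen-dhchap-key --hmac=2 --nqn="$hostnqn")
# connect with host authentication only; add --dhchap-ctrl-secret for bidirectional auth
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$hostkey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0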
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.011 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.011 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.011 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.011 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.011 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:17:04.577 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.577 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.577 21:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.577 21:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.837 21:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.837 21:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.837 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.837 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.095 00:17:05.095 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.095 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.095 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.354 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.354 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.354 21:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.354 21:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.354 21:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.354 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.354 { 00:17:05.354 "cntlid": 69, 00:17:05.354 "qid": 0, 00:17:05.354 "state": "enabled", 00:17:05.354 "thread": "nvmf_tgt_poll_group_000", 00:17:05.354 "listen_address": { 00:17:05.354 "trtype": "TCP", 00:17:05.354 "adrfam": "IPv4", 00:17:05.354 "traddr": "10.0.0.2", 00:17:05.354 "trsvcid": "4420" 00:17:05.354 }, 00:17:05.354 "peer_address": { 00:17:05.354 "trtype": "TCP", 00:17:05.355 "adrfam": "IPv4", 00:17:05.355 "traddr": "10.0.0.1", 00:17:05.355 "trsvcid": "37570" 00:17:05.355 }, 00:17:05.355 "auth": { 00:17:05.355 "state": "completed", 00:17:05.355 "digest": "sha384", 00:17:05.355 "dhgroup": "ffdhe3072" 00:17:05.355 } 00:17:05.355 } 00:17:05.355 ]' 00:17:05.355 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.355 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.355 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.355 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.355 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.355 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.355 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.355 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.614 21:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret 
DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:06.181 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.181 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.181 21:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.181 21:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.181 21:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.181 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.181 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.181 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.440 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.699 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.699 { 00:17:06.699 "cntlid": 71, 00:17:06.699 "qid": 0, 00:17:06.699 "state": "enabled", 00:17:06.699 "thread": "nvmf_tgt_poll_group_000", 00:17:06.699 "listen_address": { 00:17:06.699 "trtype": "TCP", 00:17:06.699 "adrfam": "IPv4", 00:17:06.699 "traddr": "10.0.0.2", 00:17:06.699 "trsvcid": "4420" 00:17:06.699 }, 00:17:06.699 "peer_address": { 00:17:06.699 "trtype": "TCP", 00:17:06.699 "adrfam": "IPv4", 00:17:06.699 "traddr": "10.0.0.1", 00:17:06.699 "trsvcid": "37594" 00:17:06.699 }, 00:17:06.699 "auth": { 00:17:06.699 "state": "completed", 00:17:06.699 "digest": "sha384", 00:17:06.699 "dhgroup": "ffdhe3072" 00:17:06.699 } 00:17:06.699 } 00:17:06.699 ]' 00:17:06.699 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.958 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.958 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.958 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.958 21:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.958 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.958 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.958 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.217 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.787 21:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.045 00:17:08.045 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.045 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.045 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.303 { 00:17:08.303 "cntlid": 73, 00:17:08.303 "qid": 0, 00:17:08.303 "state": "enabled", 00:17:08.303 "thread": "nvmf_tgt_poll_group_000", 00:17:08.303 "listen_address": { 00:17:08.303 "trtype": "TCP", 00:17:08.303 "adrfam": "IPv4", 00:17:08.303 "traddr": "10.0.0.2", 00:17:08.303 "trsvcid": "4420" 00:17:08.303 }, 00:17:08.303 "peer_address": { 00:17:08.303 "trtype": "TCP", 00:17:08.303 "adrfam": "IPv4", 00:17:08.303 "traddr": "10.0.0.1", 00:17:08.303 "trsvcid": "37628" 00:17:08.303 }, 00:17:08.303 "auth": { 00:17:08.303 
"state": "completed", 00:17:08.303 "digest": "sha384", 00:17:08.303 "dhgroup": "ffdhe4096" 00:17:08.303 } 00:17:08.303 } 00:17:08.303 ]' 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.303 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.561 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.561 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.561 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.561 21:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:09.127 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.128 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.128 21:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.128 21:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.128 21:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.128 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.128 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.128 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.386 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.643 00:17:09.643 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.643 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.643 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.901 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.901 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.901 21:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.901 21:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.901 21:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.901 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.901 { 00:17:09.901 "cntlid": 75, 00:17:09.901 "qid": 0, 00:17:09.901 "state": "enabled", 00:17:09.901 "thread": "nvmf_tgt_poll_group_000", 00:17:09.901 "listen_address": { 00:17:09.901 "trtype": "TCP", 00:17:09.901 "adrfam": "IPv4", 00:17:09.901 "traddr": "10.0.0.2", 00:17:09.901 "trsvcid": "4420" 00:17:09.901 }, 00:17:09.901 "peer_address": { 00:17:09.901 "trtype": "TCP", 00:17:09.901 "adrfam": "IPv4", 00:17:09.901 "traddr": "10.0.0.1", 00:17:09.901 "trsvcid": "37642" 00:17:09.901 }, 00:17:09.901 "auth": { 00:17:09.901 "state": "completed", 00:17:09.901 "digest": "sha384", 00:17:09.901 "dhgroup": "ffdhe4096" 00:17:09.901 } 00:17:09.901 } 00:17:09.901 ]' 00:17:09.901 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.901 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.901 21:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.902 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.902 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.902 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.902 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.902 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.159 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.751 21:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.010 21:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.010 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.010 21:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:11.010 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.268 { 00:17:11.268 "cntlid": 77, 00:17:11.268 "qid": 0, 00:17:11.268 "state": "enabled", 00:17:11.268 "thread": "nvmf_tgt_poll_group_000", 00:17:11.268 "listen_address": { 00:17:11.268 "trtype": "TCP", 00:17:11.268 "adrfam": "IPv4", 00:17:11.268 "traddr": "10.0.0.2", 00:17:11.268 "trsvcid": "4420" 00:17:11.268 }, 00:17:11.268 "peer_address": { 00:17:11.268 "trtype": "TCP", 00:17:11.268 "adrfam": "IPv4", 00:17:11.268 "traddr": "10.0.0.1", 00:17:11.268 "trsvcid": "37668" 00:17:11.268 }, 00:17:11.268 "auth": { 00:17:11.268 "state": "completed", 00:17:11.268 "digest": "sha384", 00:17:11.268 "dhgroup": "ffdhe4096" 00:17:11.268 } 00:17:11.268 } 00:17:11.268 ]' 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.268 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.527 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:11.527 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.527 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.527 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.527 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.527 21:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:12.094 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.095 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.095 21:55:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.095 21:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.095 21:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.095 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.095 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.095 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.355 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.614 00:17:12.614 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.614 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.614 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.872 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.872 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.872 21:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.872 21:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.872 21:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.872 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.872 { 00:17:12.872 "cntlid": 79, 00:17:12.872 "qid": 
0, 00:17:12.872 "state": "enabled", 00:17:12.872 "thread": "nvmf_tgt_poll_group_000", 00:17:12.872 "listen_address": { 00:17:12.872 "trtype": "TCP", 00:17:12.872 "adrfam": "IPv4", 00:17:12.872 "traddr": "10.0.0.2", 00:17:12.872 "trsvcid": "4420" 00:17:12.872 }, 00:17:12.872 "peer_address": { 00:17:12.872 "trtype": "TCP", 00:17:12.872 "adrfam": "IPv4", 00:17:12.872 "traddr": "10.0.0.1", 00:17:12.872 "trsvcid": "37690" 00:17:12.872 }, 00:17:12.872 "auth": { 00:17:12.872 "state": "completed", 00:17:12.872 "digest": "sha384", 00:17:12.872 "dhgroup": "ffdhe4096" 00:17:12.872 } 00:17:12.872 } 00:17:12.872 ]' 00:17:12.872 21:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.872 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.872 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.872 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.872 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.872 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.872 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.872 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.130 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:13.696 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.696 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.696 21:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.696 21:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.696 21:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.696 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.696 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.696 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.696 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.955 21:55:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.955 21:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.214 00:17:14.214 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.214 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.214 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.473 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.473 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.473 21:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.473 21:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.473 21:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.473 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.473 { 00:17:14.473 "cntlid": 81, 00:17:14.473 "qid": 0, 00:17:14.473 "state": "enabled", 00:17:14.473 "thread": "nvmf_tgt_poll_group_000", 00:17:14.473 "listen_address": { 00:17:14.474 "trtype": "TCP", 00:17:14.474 "adrfam": "IPv4", 00:17:14.474 "traddr": "10.0.0.2", 00:17:14.474 "trsvcid": "4420" 00:17:14.474 }, 00:17:14.474 "peer_address": { 00:17:14.474 "trtype": "TCP", 00:17:14.474 "adrfam": "IPv4", 00:17:14.474 "traddr": "10.0.0.1", 00:17:14.474 "trsvcid": "42102" 00:17:14.474 }, 00:17:14.474 "auth": { 00:17:14.474 "state": "completed", 00:17:14.474 "digest": "sha384", 00:17:14.474 "dhgroup": "ffdhe6144" 00:17:14.474 } 00:17:14.474 } 00:17:14.474 ]' 00:17:14.474 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.474 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.474 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.474 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.474 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.474 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.474 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.474 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.734 21:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:15.301 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.301 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.301 21:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.301 21:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.301 21:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.301 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.301 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.301 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.559 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.818 00:17:15.818 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.818 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.818 21:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.076 { 00:17:16.076 "cntlid": 83, 00:17:16.076 "qid": 0, 00:17:16.076 "state": "enabled", 00:17:16.076 "thread": "nvmf_tgt_poll_group_000", 00:17:16.076 "listen_address": { 00:17:16.076 "trtype": "TCP", 00:17:16.076 "adrfam": "IPv4", 00:17:16.076 "traddr": "10.0.0.2", 00:17:16.076 "trsvcid": "4420" 00:17:16.076 }, 00:17:16.076 "peer_address": { 00:17:16.076 "trtype": "TCP", 00:17:16.076 "adrfam": "IPv4", 00:17:16.076 "traddr": "10.0.0.1", 00:17:16.076 "trsvcid": "42140" 00:17:16.076 }, 00:17:16.076 "auth": { 00:17:16.076 "state": "completed", 00:17:16.076 "digest": "sha384", 00:17:16.076 "dhgroup": "ffdhe6144" 00:17:16.076 } 00:17:16.076 } 00:17:16.076 ]' 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.076 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.077 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.077 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.077 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.077 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.336 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret 
DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:17:16.903 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.903 21:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.903 21:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.903 21:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.903 21:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.903 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.903 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.903 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.162 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.421 00:17:17.421 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.421 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.421 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.679 { 00:17:17.679 "cntlid": 85, 00:17:17.679 "qid": 0, 00:17:17.679 "state": "enabled", 00:17:17.679 "thread": "nvmf_tgt_poll_group_000", 00:17:17.679 "listen_address": { 00:17:17.679 "trtype": "TCP", 00:17:17.679 "adrfam": "IPv4", 00:17:17.679 "traddr": "10.0.0.2", 00:17:17.679 "trsvcid": "4420" 00:17:17.679 }, 00:17:17.679 "peer_address": { 00:17:17.679 "trtype": "TCP", 00:17:17.679 "adrfam": "IPv4", 00:17:17.679 "traddr": "10.0.0.1", 00:17:17.679 "trsvcid": "42166" 00:17:17.679 }, 00:17:17.679 "auth": { 00:17:17.679 "state": "completed", 00:17:17.679 "digest": "sha384", 00:17:17.679 "dhgroup": "ffdhe6144" 00:17:17.679 } 00:17:17.679 } 00:17:17.679 ]' 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.679 21:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.938 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:18.512 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.512 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.512 21:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.512 21:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.512 21:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.512 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.512 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
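
The trace above repeats one cycle per digest/DH-group/key combination: the host's bdev_nvme layer is pinned to a single DH-HMAC-CHAP digest and DH group, the host NQN is registered on the subsystem with the key (and optional controller key) under test, a controller is attached over TCP so authentication actually runs, and nvmf_subsystem_get_qpairs is checked for auth state "completed" with the expected digest and dhgroup before detaching. A condensed sketch of that cycle, with the paths, NQNs, and flags taken from the log (the specific digest/dhgroup/key values are illustrative, not the script's full test matrix):

    #!/usr/bin/env bash
    # One connect_authenticate cycle, condensed from target/auth.sh as traced above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    digest=sha384 dhgroup=ffdhe6144 keyid=2

    # Host side (-s /var/tmp/host.sock): restrict negotiation to one digest/dhgroup pair.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side: allow this host with the keys under test (key objects were loaded earlier in the script).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attaching the controller is what triggers DH-HMAC-CHAP on the new qpair.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # The qpair must report the negotiated parameters, e.g. completed / sha384 / ffdhe6144.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .state, .digest, .dhgroup'

    # Tear down before the next combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
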
00:17:18.512 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.770 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:18.770 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.770 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.770 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:18.770 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.770 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.770 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:18.771 21:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.771 21:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.771 21:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.771 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.771 21:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.029 00:17:19.029 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.029 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.029 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.286 { 00:17:19.286 "cntlid": 87, 00:17:19.286 "qid": 0, 00:17:19.286 "state": "enabled", 00:17:19.286 "thread": "nvmf_tgt_poll_group_000", 00:17:19.286 "listen_address": { 00:17:19.286 "trtype": "TCP", 00:17:19.286 "adrfam": "IPv4", 00:17:19.286 "traddr": "10.0.0.2", 00:17:19.286 "trsvcid": "4420" 00:17:19.286 }, 00:17:19.286 "peer_address": { 00:17:19.286 "trtype": "TCP", 00:17:19.286 "adrfam": "IPv4", 00:17:19.286 "traddr": "10.0.0.1", 00:17:19.286 "trsvcid": "42202" 00:17:19.286 }, 00:17:19.286 "auth": { 00:17:19.286 "state": "completed", 
00:17:19.286 "digest": "sha384", 00:17:19.286 "dhgroup": "ffdhe6144" 00:17:19.286 } 00:17:19.286 } 00:17:19.286 ]' 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.286 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.544 21:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.133 21:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.390 21:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.390 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.390 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.648 00:17:20.648 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.648 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.648 21:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.906 { 00:17:20.906 "cntlid": 89, 00:17:20.906 "qid": 0, 00:17:20.906 "state": "enabled", 00:17:20.906 "thread": "nvmf_tgt_poll_group_000", 00:17:20.906 "listen_address": { 00:17:20.906 "trtype": "TCP", 00:17:20.906 "adrfam": "IPv4", 00:17:20.906 "traddr": "10.0.0.2", 00:17:20.906 "trsvcid": "4420" 00:17:20.906 }, 00:17:20.906 "peer_address": { 00:17:20.906 "trtype": "TCP", 00:17:20.906 "adrfam": "IPv4", 00:17:20.906 "traddr": "10.0.0.1", 00:17:20.906 "trsvcid": "42228" 00:17:20.906 }, 00:17:20.906 "auth": { 00:17:20.906 "state": "completed", 00:17:20.906 "digest": "sha384", 00:17:20.906 "dhgroup": "ffdhe8192" 00:17:20.906 } 00:17:20.906 } 00:17:20.906 ]' 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.906 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.163 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.163 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.163 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.163 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:21.728 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.728 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.728 21:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.728 21:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.728 21:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.728 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.728 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.728 21:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.986 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:21.986 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.986 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.986 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:21.986 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:21.986 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.986 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.986 21:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.986 21:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.987 21:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.987 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.987 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
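
Each RPC-driven cycle is followed by the same check through the Linux kernel initiator: nvme-cli connects with the DHHC-1 secrets that correspond to the key index under test, the "disconnected 1 controller(s)" message confirms teardown, and the host is removed from the subsystem so the next combination starts clean. A minimal sketch of that leg, condensed from the connect/disconnect entries above (the DHHC-1 secrets are elided here; they are the generated values shown verbatim in the log, and key3 cases pass --dhchap-secret only):

    # Kernel-initiator leg, per the nvme connect/disconnect entries above.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Target side: drop the host entry before the next digest/dhgroup/key pass.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
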
00:17:22.554 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.554 { 00:17:22.554 "cntlid": 91, 00:17:22.554 "qid": 0, 00:17:22.554 "state": "enabled", 00:17:22.554 "thread": "nvmf_tgt_poll_group_000", 00:17:22.554 "listen_address": { 00:17:22.554 "trtype": "TCP", 00:17:22.554 "adrfam": "IPv4", 00:17:22.554 "traddr": "10.0.0.2", 00:17:22.554 "trsvcid": "4420" 00:17:22.554 }, 00:17:22.554 "peer_address": { 00:17:22.554 "trtype": "TCP", 00:17:22.554 "adrfam": "IPv4", 00:17:22.554 "traddr": "10.0.0.1", 00:17:22.554 "trsvcid": "42244" 00:17:22.554 }, 00:17:22.554 "auth": { 00:17:22.554 "state": "completed", 00:17:22.554 "digest": "sha384", 00:17:22.554 "dhgroup": "ffdhe8192" 00:17:22.554 } 00:17:22.554 } 00:17:22.554 ]' 00:17:22.554 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.813 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.813 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.813 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.813 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.813 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.813 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.813 21:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.072 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.640 21:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.208 00:17:24.208 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.208 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.208 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.467 { 
00:17:24.467 "cntlid": 93, 00:17:24.467 "qid": 0, 00:17:24.467 "state": "enabled", 00:17:24.467 "thread": "nvmf_tgt_poll_group_000", 00:17:24.467 "listen_address": { 00:17:24.467 "trtype": "TCP", 00:17:24.467 "adrfam": "IPv4", 00:17:24.467 "traddr": "10.0.0.2", 00:17:24.467 "trsvcid": "4420" 00:17:24.467 }, 00:17:24.467 "peer_address": { 00:17:24.467 "trtype": "TCP", 00:17:24.467 "adrfam": "IPv4", 00:17:24.467 "traddr": "10.0.0.1", 00:17:24.467 "trsvcid": "38284" 00:17:24.467 }, 00:17:24.467 "auth": { 00:17:24.467 "state": "completed", 00:17:24.467 "digest": "sha384", 00:17:24.467 "dhgroup": "ffdhe8192" 00:17:24.467 } 00:17:24.467 } 00:17:24.467 ]' 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.467 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.725 21:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:25.307 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.307 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.307 21:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.307 21:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.307 21:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.307 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.307 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.308 21:55:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.308 21:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.875 00:17:25.875 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.875 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.875 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.133 { 00:17:26.133 "cntlid": 95, 00:17:26.133 "qid": 0, 00:17:26.133 "state": "enabled", 00:17:26.133 "thread": "nvmf_tgt_poll_group_000", 00:17:26.133 "listen_address": { 00:17:26.133 "trtype": "TCP", 00:17:26.133 "adrfam": "IPv4", 00:17:26.133 "traddr": "10.0.0.2", 00:17:26.133 "trsvcid": "4420" 00:17:26.133 }, 00:17:26.133 "peer_address": { 00:17:26.133 "trtype": "TCP", 00:17:26.133 "adrfam": "IPv4", 00:17:26.133 "traddr": "10.0.0.1", 00:17:26.133 "trsvcid": "38304" 00:17:26.133 }, 00:17:26.133 "auth": { 00:17:26.133 "state": "completed", 00:17:26.133 "digest": "sha384", 00:17:26.133 "dhgroup": "ffdhe8192" 00:17:26.133 } 00:17:26.133 } 00:17:26.133 ]' 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.133 21:55:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.133 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.391 21:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:26.958 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.217 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.217 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.476 { 00:17:27.476 "cntlid": 97, 00:17:27.476 "qid": 0, 00:17:27.476 "state": "enabled", 00:17:27.476 "thread": "nvmf_tgt_poll_group_000", 00:17:27.476 "listen_address": { 00:17:27.476 "trtype": "TCP", 00:17:27.476 "adrfam": "IPv4", 00:17:27.476 "traddr": "10.0.0.2", 00:17:27.476 "trsvcid": "4420" 00:17:27.476 }, 00:17:27.476 "peer_address": { 00:17:27.476 "trtype": "TCP", 00:17:27.476 "adrfam": "IPv4", 00:17:27.476 "traddr": "10.0.0.1", 00:17:27.476 "trsvcid": "38330" 00:17:27.476 }, 00:17:27.476 "auth": { 00:17:27.476 "state": "completed", 00:17:27.476 "digest": "sha512", 00:17:27.476 "dhgroup": "null" 00:17:27.476 } 00:17:27.476 } 00:17:27.476 ]' 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.476 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.735 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:27.735 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.735 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.735 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.735 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.736 21:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret 
DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:28.315 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.315 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.315 21:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.315 21:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.315 21:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.315 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.315 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:28.315 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.574 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.832 00:17:28.832 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.832 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.832 21:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.092 { 00:17:29.092 "cntlid": 99, 00:17:29.092 "qid": 0, 00:17:29.092 "state": "enabled", 00:17:29.092 "thread": "nvmf_tgt_poll_group_000", 00:17:29.092 "listen_address": { 00:17:29.092 "trtype": "TCP", 00:17:29.092 "adrfam": "IPv4", 00:17:29.092 "traddr": "10.0.0.2", 00:17:29.092 "trsvcid": "4420" 00:17:29.092 }, 00:17:29.092 "peer_address": { 00:17:29.092 "trtype": "TCP", 00:17:29.092 "adrfam": "IPv4", 00:17:29.092 "traddr": "10.0.0.1", 00:17:29.092 "trsvcid": "38346" 00:17:29.092 }, 00:17:29.092 "auth": { 00:17:29.092 "state": "completed", 00:17:29.092 "digest": "sha512", 00:17:29.092 "dhgroup": "null" 00:17:29.092 } 00:17:29.092 } 00:17:29.092 ]' 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.092 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.351 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:17:29.917 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.917 21:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.917 21:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.917 21:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.917 21:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.917 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.917 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:29.917 21:55:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.176 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.176 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.435 { 00:17:30.435 "cntlid": 101, 00:17:30.435 "qid": 0, 00:17:30.435 "state": "enabled", 00:17:30.435 "thread": "nvmf_tgt_poll_group_000", 00:17:30.435 "listen_address": { 00:17:30.435 "trtype": "TCP", 00:17:30.435 "adrfam": "IPv4", 00:17:30.435 "traddr": "10.0.0.2", 00:17:30.435 "trsvcid": "4420" 00:17:30.435 }, 00:17:30.435 "peer_address": { 00:17:30.435 "trtype": "TCP", 00:17:30.435 "adrfam": "IPv4", 00:17:30.435 "traddr": "10.0.0.1", 00:17:30.435 "trsvcid": "38380" 00:17:30.435 }, 00:17:30.435 "auth": 
{ 00:17:30.435 "state": "completed", 00:17:30.435 "digest": "sha512", 00:17:30.435 "dhgroup": "null" 00:17:30.435 } 00:17:30.435 } 00:17:30.435 ]' 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.435 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.693 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:30.693 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.693 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.693 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.693 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.693 21:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:31.260 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.260 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.260 21:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.260 21:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.260 21:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.260 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.260 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:31.260 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.518 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.777 00:17:31.777 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.777 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.777 21:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.036 { 00:17:32.036 "cntlid": 103, 00:17:32.036 "qid": 0, 00:17:32.036 "state": "enabled", 00:17:32.036 "thread": "nvmf_tgt_poll_group_000", 00:17:32.036 "listen_address": { 00:17:32.036 "trtype": "TCP", 00:17:32.036 "adrfam": "IPv4", 00:17:32.036 "traddr": "10.0.0.2", 00:17:32.036 "trsvcid": "4420" 00:17:32.036 }, 00:17:32.036 "peer_address": { 00:17:32.036 "trtype": "TCP", 00:17:32.036 "adrfam": "IPv4", 00:17:32.036 "traddr": "10.0.0.1", 00:17:32.036 "trsvcid": "38406" 00:17:32.036 }, 00:17:32.036 "auth": { 00:17:32.036 "state": "completed", 00:17:32.036 "digest": "sha512", 00:17:32.036 "dhgroup": "null" 00:17:32.036 } 00:17:32.036 } 00:17:32.036 ]' 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.036 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.294 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:32.861 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.861 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.861 21:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.861 21:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.861 21:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.861 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.861 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.861 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.861 21:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.119 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.377 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.377 21:55:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.377 { 00:17:33.377 "cntlid": 105, 00:17:33.377 "qid": 0, 00:17:33.377 "state": "enabled", 00:17:33.377 "thread": "nvmf_tgt_poll_group_000", 00:17:33.377 "listen_address": { 00:17:33.377 "trtype": "TCP", 00:17:33.377 "adrfam": "IPv4", 00:17:33.377 "traddr": "10.0.0.2", 00:17:33.377 "trsvcid": "4420" 00:17:33.377 }, 00:17:33.377 "peer_address": { 00:17:33.377 "trtype": "TCP", 00:17:33.377 "adrfam": "IPv4", 00:17:33.377 "traddr": "10.0.0.1", 00:17:33.377 "trsvcid": "58218" 00:17:33.377 }, 00:17:33.377 "auth": { 00:17:33.377 "state": "completed", 00:17:33.377 "digest": "sha512", 00:17:33.377 "dhgroup": "ffdhe2048" 00:17:33.377 } 00:17:33.377 } 00:17:33.377 ]' 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.377 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.635 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.635 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.635 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.635 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.635 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.893 21:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
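[editor's note] The verification step repeated after every successful attach is the same jq triple-check against the target's qpair list; a minimal sketch for the sha512/ffdhe2048 rounds running here, reusing $RPC from the sketch above:

  # target side: confirm the negotiated auth parameters on the qpair
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # host side: tear the controller down before the next key is tried
  "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
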
00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.459 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.717 00:17:34.717 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.717 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.717 21:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.975 { 00:17:34.975 "cntlid": 107, 00:17:34.975 "qid": 0, 00:17:34.975 "state": "enabled", 00:17:34.975 "thread": 
"nvmf_tgt_poll_group_000", 00:17:34.975 "listen_address": { 00:17:34.975 "trtype": "TCP", 00:17:34.975 "adrfam": "IPv4", 00:17:34.975 "traddr": "10.0.0.2", 00:17:34.975 "trsvcid": "4420" 00:17:34.975 }, 00:17:34.975 "peer_address": { 00:17:34.975 "trtype": "TCP", 00:17:34.975 "adrfam": "IPv4", 00:17:34.975 "traddr": "10.0.0.1", 00:17:34.975 "trsvcid": "58234" 00:17:34.975 }, 00:17:34.975 "auth": { 00:17:34.975 "state": "completed", 00:17:34.975 "digest": "sha512", 00:17:34.975 "dhgroup": "ffdhe2048" 00:17:34.975 } 00:17:34.975 } 00:17:34.975 ]' 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.975 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.233 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:17:35.800 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.800 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.800 21:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.800 21:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.800 21:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.800 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.800 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.800 21:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:36.059 21:55:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.059 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.317 00:17:36.317 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.317 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.317 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.576 { 00:17:36.576 "cntlid": 109, 00:17:36.576 "qid": 0, 00:17:36.576 "state": "enabled", 00:17:36.576 "thread": "nvmf_tgt_poll_group_000", 00:17:36.576 "listen_address": { 00:17:36.576 "trtype": "TCP", 00:17:36.576 "adrfam": "IPv4", 00:17:36.576 "traddr": "10.0.0.2", 00:17:36.576 "trsvcid": "4420" 00:17:36.576 }, 00:17:36.576 "peer_address": { 00:17:36.576 "trtype": "TCP", 00:17:36.576 "adrfam": "IPv4", 00:17:36.576 "traddr": "10.0.0.1", 00:17:36.576 "trsvcid": "58248" 00:17:36.576 }, 00:17:36.576 "auth": { 00:17:36.576 "state": "completed", 00:17:36.576 "digest": "sha512", 00:17:36.576 "dhgroup": "ffdhe2048" 00:17:36.576 } 00:17:36.576 } 00:17:36.576 ]' 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.576 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.835 21:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.403 21:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.662 21:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.662 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.662 21:55:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.662 00:17:37.921 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.921 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.921 21:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.921 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.921 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.921 21:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.921 21:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.921 21:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.921 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.921 { 00:17:37.921 "cntlid": 111, 00:17:37.921 "qid": 0, 00:17:37.921 "state": "enabled", 00:17:37.921 "thread": "nvmf_tgt_poll_group_000", 00:17:37.921 "listen_address": { 00:17:37.921 "trtype": "TCP", 00:17:37.921 "adrfam": "IPv4", 00:17:37.921 "traddr": "10.0.0.2", 00:17:37.921 "trsvcid": "4420" 00:17:37.921 }, 00:17:37.921 "peer_address": { 00:17:37.921 "trtype": "TCP", 00:17:37.921 "adrfam": "IPv4", 00:17:37.921 "traddr": "10.0.0.1", 00:17:37.921 "trsvcid": "58268" 00:17:37.921 }, 00:17:37.921 "auth": { 00:17:37.921 "state": "completed", 00:17:37.921 "digest": "sha512", 00:17:37.921 "dhgroup": "ffdhe2048" 00:17:37.921 } 00:17:37.921 } 00:17:37.921 ]' 00:17:37.921 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.921 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.921 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.180 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.180 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.180 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.180 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.180 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.180 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:38.747 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.747 21:55:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.747 21:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.747 21:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.747 21:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.747 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.747 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.747 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.747 21:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.006 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.287 00:17:39.287 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.287 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.287 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.544 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.544 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.544 21:55:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.544 21:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.544 21:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.544 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.544 { 00:17:39.544 "cntlid": 113, 00:17:39.544 "qid": 0, 00:17:39.544 "state": "enabled", 00:17:39.544 "thread": "nvmf_tgt_poll_group_000", 00:17:39.544 "listen_address": { 00:17:39.544 "trtype": "TCP", 00:17:39.544 "adrfam": "IPv4", 00:17:39.545 "traddr": "10.0.0.2", 00:17:39.545 "trsvcid": "4420" 00:17:39.545 }, 00:17:39.545 "peer_address": { 00:17:39.545 "trtype": "TCP", 00:17:39.545 "adrfam": "IPv4", 00:17:39.545 "traddr": "10.0.0.1", 00:17:39.545 "trsvcid": "58302" 00:17:39.545 }, 00:17:39.545 "auth": { 00:17:39.545 "state": "completed", 00:17:39.545 "digest": "sha512", 00:17:39.545 "dhgroup": "ffdhe3072" 00:17:39.545 } 00:17:39.545 } 00:17:39.545 ]' 00:17:39.545 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.545 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.545 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.545 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.545 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.545 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.545 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.545 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.803 21:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:40.370 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.370 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.370 21:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.370 21:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.370 21:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.370 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.370 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.370 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.628 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.628 00:17:40.886 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.886 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.886 21:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.886 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.886 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.886 21:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.886 21:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.886 21:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.886 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.886 { 00:17:40.886 "cntlid": 115, 00:17:40.886 "qid": 0, 00:17:40.886 "state": "enabled", 00:17:40.886 "thread": "nvmf_tgt_poll_group_000", 00:17:40.886 "listen_address": { 00:17:40.886 "trtype": "TCP", 00:17:40.886 "adrfam": "IPv4", 00:17:40.886 "traddr": "10.0.0.2", 00:17:40.886 "trsvcid": "4420" 00:17:40.886 }, 00:17:40.886 "peer_address": { 00:17:40.886 "trtype": "TCP", 00:17:40.886 "adrfam": "IPv4", 00:17:40.886 "traddr": "10.0.0.1", 00:17:40.886 "trsvcid": "58326" 00:17:40.886 }, 00:17:40.886 "auth": { 00:17:40.886 "state": "completed", 00:17:40.886 "digest": "sha512", 00:17:40.886 "dhgroup": "ffdhe3072" 00:17:40.886 } 00:17:40.886 } 
00:17:40.886 ]' 00:17:40.886 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.886 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.886 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.143 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.143 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.143 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.143 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.143 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.143 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:17:41.707 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.707 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.707 21:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.707 21:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.707 21:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.707 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.707 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:41.707 21:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.965 21:55:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.965 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.224 00:17:42.224 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.224 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.224 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.481 { 00:17:42.481 "cntlid": 117, 00:17:42.481 "qid": 0, 00:17:42.481 "state": "enabled", 00:17:42.481 "thread": "nvmf_tgt_poll_group_000", 00:17:42.481 "listen_address": { 00:17:42.481 "trtype": "TCP", 00:17:42.481 "adrfam": "IPv4", 00:17:42.481 "traddr": "10.0.0.2", 00:17:42.481 "trsvcid": "4420" 00:17:42.481 }, 00:17:42.481 "peer_address": { 00:17:42.481 "trtype": "TCP", 00:17:42.481 "adrfam": "IPv4", 00:17:42.481 "traddr": "10.0.0.1", 00:17:42.481 "trsvcid": "58358" 00:17:42.481 }, 00:17:42.481 "auth": { 00:17:42.481 "state": "completed", 00:17:42.481 "digest": "sha512", 00:17:42.481 "dhgroup": "ffdhe3072" 00:17:42.481 } 00:17:42.481 } 00:17:42.481 ]' 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.481 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.739 21:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:43.305 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.305 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.305 21:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.305 21:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.305 21:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.305 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.305 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.305 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.564 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.822 00:17:43.822 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.822 21:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.822 21:55:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.822 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.822 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.822 21:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.822 21:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.822 21:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.822 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.822 { 00:17:43.822 "cntlid": 119, 00:17:43.822 "qid": 0, 00:17:43.822 "state": "enabled", 00:17:43.822 "thread": "nvmf_tgt_poll_group_000", 00:17:43.822 "listen_address": { 00:17:43.823 "trtype": "TCP", 00:17:43.823 "adrfam": "IPv4", 00:17:43.823 "traddr": "10.0.0.2", 00:17:43.823 "trsvcid": "4420" 00:17:43.823 }, 00:17:43.823 "peer_address": { 00:17:43.823 "trtype": "TCP", 00:17:43.823 "adrfam": "IPv4", 00:17:43.823 "traddr": "10.0.0.1", 00:17:43.823 "trsvcid": "47436" 00:17:43.823 }, 00:17:43.823 "auth": { 00:17:43.823 "state": "completed", 00:17:43.823 "digest": "sha512", 00:17:43.823 "dhgroup": "ffdhe3072" 00:17:43.823 } 00:17:43.823 } 00:17:43.823 ]' 00:17:43.823 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.081 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.081 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.081 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.081 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.081 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.081 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.081 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.341 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:44.907 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.907 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.907 21:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.907 21:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.907 21:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.907 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.907 21:55:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.907 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:44.907 21:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.908 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.165 00:17:45.165 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.165 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.165 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.424 { 00:17:45.424 "cntlid": 121, 00:17:45.424 "qid": 0, 00:17:45.424 "state": "enabled", 00:17:45.424 "thread": "nvmf_tgt_poll_group_000", 00:17:45.424 "listen_address": { 00:17:45.424 "trtype": "TCP", 00:17:45.424 "adrfam": "IPv4", 
00:17:45.424 "traddr": "10.0.0.2", 00:17:45.424 "trsvcid": "4420" 00:17:45.424 }, 00:17:45.424 "peer_address": { 00:17:45.424 "trtype": "TCP", 00:17:45.424 "adrfam": "IPv4", 00:17:45.424 "traddr": "10.0.0.1", 00:17:45.424 "trsvcid": "47456" 00:17:45.424 }, 00:17:45.424 "auth": { 00:17:45.424 "state": "completed", 00:17:45.424 "digest": "sha512", 00:17:45.424 "dhgroup": "ffdhe4096" 00:17:45.424 } 00:17:45.424 } 00:17:45.424 ]' 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.424 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.682 21:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:46.249 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.249 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.249 21:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.249 21:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.249 21:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.249 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.249 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.249 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:46.508 21:55:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.508 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.766 00:17:46.766 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.766 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.766 21:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.024 { 00:17:47.024 "cntlid": 123, 00:17:47.024 "qid": 0, 00:17:47.024 "state": "enabled", 00:17:47.024 "thread": "nvmf_tgt_poll_group_000", 00:17:47.024 "listen_address": { 00:17:47.024 "trtype": "TCP", 00:17:47.024 "adrfam": "IPv4", 00:17:47.024 "traddr": "10.0.0.2", 00:17:47.024 "trsvcid": "4420" 00:17:47.024 }, 00:17:47.024 "peer_address": { 00:17:47.024 "trtype": "TCP", 00:17:47.024 "adrfam": "IPv4", 00:17:47.024 "traddr": "10.0.0.1", 00:17:47.024 "trsvcid": "47490" 00:17:47.024 }, 00:17:47.024 "auth": { 00:17:47.024 "state": "completed", 00:17:47.024 "digest": "sha512", 00:17:47.024 "dhgroup": "ffdhe4096" 00:17:47.024 } 00:17:47.024 } 00:17:47.024 ]' 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.024 21:55:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.024 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.283 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:17:47.847 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.847 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.847 21:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.847 21:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.847 21:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.847 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.847 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.847 21:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.105 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.363 00:17:48.363 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.363 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.363 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.363 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.363 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.363 21:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.620 { 00:17:48.620 "cntlid": 125, 00:17:48.620 "qid": 0, 00:17:48.620 "state": "enabled", 00:17:48.620 "thread": "nvmf_tgt_poll_group_000", 00:17:48.620 "listen_address": { 00:17:48.620 "trtype": "TCP", 00:17:48.620 "adrfam": "IPv4", 00:17:48.620 "traddr": "10.0.0.2", 00:17:48.620 "trsvcid": "4420" 00:17:48.620 }, 00:17:48.620 "peer_address": { 00:17:48.620 "trtype": "TCP", 00:17:48.620 "adrfam": "IPv4", 00:17:48.620 "traddr": "10.0.0.1", 00:17:48.620 "trsvcid": "47518" 00:17:48.620 }, 00:17:48.620 "auth": { 00:17:48.620 "state": "completed", 00:17:48.620 "digest": "sha512", 00:17:48.620 "dhgroup": "ffdhe4096" 00:17:48.620 } 00:17:48.620 } 00:17:48.620 ]' 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.620 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.876 21:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
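Note: every pass in this trace exercises the same DH-CHAP round trip, one digest/dhgroup pair per iteration. A condensed bash sketch of that flow, reusing the RPC socket, subsystem NQN, and host UUID seen in this run (a simplified outline for reference, not the literal target/auth.sh; key names and DHHC secrets below are placeholders):

    #!/usr/bin/env bash
    # Condensed from the traced flow above; "..." values are placeholders.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock                      # host-side SPDK instance
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    # 1. Pin the host bdev layer to the digest/dhgroup pair under test.
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # 2. Register the host on the target with the DH-CHAP key(s) for this pass.
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach via the host RPC path, confirm the handshake completed, detach.
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'   # expect: completed
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0

    # 4. Repeat the handshake from the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret "DHHC-1:02:..." --dhchap-ctrl-secret "DHHC-1:01:..."
    nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

The bare rpc.py calls appear to hit the target's default socket, while -s /var/tmp/host.sock addresses a second, host-side SPDK instance on the same box; that split is why each step shows up twice in the trace (rpc_cmd versus the hostrpc wrapper).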
00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.443 21:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.702 21:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.702 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.702 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.702 00:17:49.960 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.960 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.960 21:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.960 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.960 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.960 21:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.960 21:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:49.960 21:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.960 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.960 { 00:17:49.960 "cntlid": 127, 00:17:49.960 "qid": 0, 00:17:49.960 "state": "enabled", 00:17:49.960 "thread": "nvmf_tgt_poll_group_000", 00:17:49.960 "listen_address": { 00:17:49.960 "trtype": "TCP", 00:17:49.960 "adrfam": "IPv4", 00:17:49.960 "traddr": "10.0.0.2", 00:17:49.960 "trsvcid": "4420" 00:17:49.961 }, 00:17:49.961 "peer_address": { 00:17:49.961 "trtype": "TCP", 00:17:49.961 "adrfam": "IPv4", 00:17:49.961 "traddr": "10.0.0.1", 00:17:49.961 "trsvcid": "47542" 00:17:49.961 }, 00:17:49.961 "auth": { 00:17:49.961 "state": "completed", 00:17:49.961 "digest": "sha512", 00:17:49.961 "dhgroup": "ffdhe4096" 00:17:49.961 } 00:17:49.961 } 00:17:49.961 ]' 00:17:49.961 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.961 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.961 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.218 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.218 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.218 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.218 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.218 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.475 21:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.041 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.042 21:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.042 21:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.042 21:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.042 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.042 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.608 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.608 { 00:17:51.608 "cntlid": 129, 00:17:51.608 "qid": 0, 00:17:51.608 "state": "enabled", 00:17:51.608 "thread": "nvmf_tgt_poll_group_000", 00:17:51.608 "listen_address": { 00:17:51.608 "trtype": "TCP", 00:17:51.608 "adrfam": "IPv4", 00:17:51.608 "traddr": "10.0.0.2", 00:17:51.608 "trsvcid": "4420" 00:17:51.608 }, 00:17:51.608 "peer_address": { 00:17:51.608 "trtype": "TCP", 00:17:51.608 "adrfam": "IPv4", 00:17:51.608 "traddr": "10.0.0.1", 00:17:51.608 "trsvcid": "47576" 00:17:51.608 }, 00:17:51.608 "auth": { 00:17:51.608 "state": "completed", 00:17:51.608 "digest": "sha512", 00:17:51.608 "dhgroup": "ffdhe6144" 00:17:51.608 } 00:17:51.608 } 00:17:51.608 ]' 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.608 21:55:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.608 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.867 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.867 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.867 21:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.867 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:52.434 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.434 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.434 21:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.434 21:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.434 21:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.434 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.434 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.434 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.693 21:55:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.693 21:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.952 00:17:52.952 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.952 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.952 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.211 { 00:17:53.211 "cntlid": 131, 00:17:53.211 "qid": 0, 00:17:53.211 "state": "enabled", 00:17:53.211 "thread": "nvmf_tgt_poll_group_000", 00:17:53.211 "listen_address": { 00:17:53.211 "trtype": "TCP", 00:17:53.211 "adrfam": "IPv4", 00:17:53.211 "traddr": "10.0.0.2", 00:17:53.211 "trsvcid": "4420" 00:17:53.211 }, 00:17:53.211 "peer_address": { 00:17:53.211 "trtype": "TCP", 00:17:53.211 "adrfam": "IPv4", 00:17:53.211 "traddr": "10.0.0.1", 00:17:53.211 "trsvcid": "43626" 00:17:53.211 }, 00:17:53.211 "auth": { 00:17:53.211 "state": "completed", 00:17:53.211 "digest": "sha512", 00:17:53.211 "dhgroup": "ffdhe6144" 00:17:53.211 } 00:17:53.211 } 00:17:53.211 ]' 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.211 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.508 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.508 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.508 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.508 21:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:17:54.105 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.105 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.105 21:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.105 21:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.105 21:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.105 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.105 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.105 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.363 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.621 00:17:54.621 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.621 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.621 21:55:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.880 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.880 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.880 21:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.880 21:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.880 21:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.880 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.880 { 00:17:54.880 "cntlid": 133, 00:17:54.880 "qid": 0, 00:17:54.880 "state": "enabled", 00:17:54.880 "thread": "nvmf_tgt_poll_group_000", 00:17:54.880 "listen_address": { 00:17:54.880 "trtype": "TCP", 00:17:54.880 "adrfam": "IPv4", 00:17:54.880 "traddr": "10.0.0.2", 00:17:54.880 "trsvcid": "4420" 00:17:54.880 }, 00:17:54.880 "peer_address": { 00:17:54.880 "trtype": "TCP", 00:17:54.880 "adrfam": "IPv4", 00:17:54.880 "traddr": "10.0.0.1", 00:17:54.880 "trsvcid": "43646" 00:17:54.880 }, 00:17:54.880 "auth": { 00:17:54.880 "state": "completed", 00:17:54.880 "digest": "sha512", 00:17:54.880 "dhgroup": "ffdhe6144" 00:17:54.880 } 00:17:54.880 } 00:17:54.880 ]' 00:17:54.880 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.880 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.880 21:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.880 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.880 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.880 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.880 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.880 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.138 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:17:55.705 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.705 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.705 21:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.705 21:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.705 21:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.705 21:55:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.705 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.705 21:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.964 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.223 00:17:56.223 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.223 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.223 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.480 { 00:17:56.480 "cntlid": 135, 00:17:56.480 "qid": 0, 00:17:56.480 "state": "enabled", 00:17:56.480 "thread": "nvmf_tgt_poll_group_000", 00:17:56.480 "listen_address": { 00:17:56.480 "trtype": "TCP", 00:17:56.480 "adrfam": "IPv4", 00:17:56.480 "traddr": "10.0.0.2", 00:17:56.480 "trsvcid": "4420" 00:17:56.480 }, 
00:17:56.480 "peer_address": { 00:17:56.480 "trtype": "TCP", 00:17:56.480 "adrfam": "IPv4", 00:17:56.480 "traddr": "10.0.0.1", 00:17:56.480 "trsvcid": "43674" 00:17:56.480 }, 00:17:56.480 "auth": { 00:17:56.480 "state": "completed", 00:17:56.480 "digest": "sha512", 00:17:56.480 "dhgroup": "ffdhe6144" 00:17:56.480 } 00:17:56.480 } 00:17:56.480 ]' 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.480 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.737 21:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:17:57.304 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.304 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.304 21:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.304 21:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.304 21:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.304 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.304 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.304 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:57.304 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.562 21:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.129 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.129 { 00:17:58.129 "cntlid": 137, 00:17:58.129 "qid": 0, 00:17:58.129 "state": "enabled", 00:17:58.129 "thread": "nvmf_tgt_poll_group_000", 00:17:58.129 "listen_address": { 00:17:58.129 "trtype": "TCP", 00:17:58.129 "adrfam": "IPv4", 00:17:58.129 "traddr": "10.0.0.2", 00:17:58.129 "trsvcid": "4420" 00:17:58.129 }, 00:17:58.129 "peer_address": { 00:17:58.129 "trtype": "TCP", 00:17:58.129 "adrfam": "IPv4", 00:17:58.129 "traddr": "10.0.0.1", 00:17:58.129 "trsvcid": "43698" 00:17:58.129 }, 00:17:58.129 "auth": { 00:17:58.129 "state": "completed", 00:17:58.129 "digest": "sha512", 00:17:58.129 "dhgroup": "ffdhe8192" 00:17:58.129 } 00:17:58.129 } 00:17:58.129 ]' 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.129 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.387 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.387 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.387 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.387 21:55:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.387 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.387 21:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:17:58.953 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.953 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.953 21:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.953 21:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.953 21:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.953 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.953 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.953 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.210 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.776 00:17:59.777 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.777 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.777 21:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.035 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.035 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.035 21:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.036 { 00:18:00.036 "cntlid": 139, 00:18:00.036 "qid": 0, 00:18:00.036 "state": "enabled", 00:18:00.036 "thread": "nvmf_tgt_poll_group_000", 00:18:00.036 "listen_address": { 00:18:00.036 "trtype": "TCP", 00:18:00.036 "adrfam": "IPv4", 00:18:00.036 "traddr": "10.0.0.2", 00:18:00.036 "trsvcid": "4420" 00:18:00.036 }, 00:18:00.036 "peer_address": { 00:18:00.036 "trtype": "TCP", 00:18:00.036 "adrfam": "IPv4", 00:18:00.036 "traddr": "10.0.0.1", 00:18:00.036 "trsvcid": "43720" 00:18:00.036 }, 00:18:00.036 "auth": { 00:18:00.036 "state": "completed", 00:18:00.036 "digest": "sha512", 00:18:00.036 "dhgroup": "ffdhe8192" 00:18:00.036 } 00:18:00.036 } 00:18:00.036 ]' 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.036 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.294 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzI5YzZjNWVlMDcyZjViZTYyZmFjOWQyNWVkZWYzZmaOOint: --dhchap-ctrl-secret DHHC-1:02:MmRlZDg5NzczOTkxODNjZjhhMGIzYjQ0MmQ2YWY4OTg4ZGM3NzI5ZmUxYWU3ZmE064tdzA==: 00:18:00.875 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.875 21:55:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.875 21:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.875 21:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.875 21:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.875 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.875 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.875 21:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.134 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.392 00:18:01.392 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.392 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.392 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.651 { 00:18:01.651 "cntlid": 141, 00:18:01.651 "qid": 0, 00:18:01.651 "state": "enabled", 00:18:01.651 "thread": "nvmf_tgt_poll_group_000", 00:18:01.651 "listen_address": { 00:18:01.651 "trtype": "TCP", 00:18:01.651 "adrfam": "IPv4", 00:18:01.651 "traddr": "10.0.0.2", 00:18:01.651 "trsvcid": "4420" 00:18:01.651 }, 00:18:01.651 "peer_address": { 00:18:01.651 "trtype": "TCP", 00:18:01.651 "adrfam": "IPv4", 00:18:01.651 "traddr": "10.0.0.1", 00:18:01.651 "trsvcid": "43736" 00:18:01.651 }, 00:18:01.651 "auth": { 00:18:01.651 "state": "completed", 00:18:01.651 "digest": "sha512", 00:18:01.651 "dhgroup": "ffdhe8192" 00:18:01.651 } 00:18:01.651 } 00:18:01.651 ]' 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.651 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.910 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.910 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.910 21:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.910 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NzY3OGQ0MWRhYjhiZDQ0MDYyYTFlYjYxNDBmMTAzY2UxNjE2MGFjZDExZjc0MWNi5Ccw7g==: --dhchap-ctrl-secret DHHC-1:01:NjQxNzBiMGRkNmU5NDFmMTBjZWYwMjRmNTAwNGQ2NzZN3Lhw: 00:18:02.477 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.477 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.477 21:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.477 21:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.477 21:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.477 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.477 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.477 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.735 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:02.735 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.735 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.735 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:02.736 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.736 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.736 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:02.736 21:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.736 21:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.736 21:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.736 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.736 21:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.303 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.303 { 00:18:03.303 "cntlid": 143, 00:18:03.303 "qid": 0, 00:18:03.303 "state": "enabled", 00:18:03.303 "thread": "nvmf_tgt_poll_group_000", 00:18:03.303 "listen_address": { 00:18:03.303 "trtype": "TCP", 00:18:03.303 "adrfam": "IPv4", 00:18:03.303 "traddr": "10.0.0.2", 00:18:03.303 "trsvcid": "4420" 00:18:03.303 }, 00:18:03.303 "peer_address": { 00:18:03.303 "trtype": "TCP", 00:18:03.303 "adrfam": "IPv4", 00:18:03.303 "traddr": "10.0.0.1", 00:18:03.303 "trsvcid": "37350" 00:18:03.303 }, 00:18:03.303 "auth": { 00:18:03.303 "state": "completed", 00:18:03.303 "digest": "sha512", 00:18:03.303 "dhgroup": "ffdhe8192" 00:18:03.303 } 00:18:03.303 } 00:18:03.303 ]' 00:18:03.303 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.561 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.561 
21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.561 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.561 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.561 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.561 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.561 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.820 21:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.388 21:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.956 00:18:04.957 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.957 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.957 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.215 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.215 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.215 21:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.215 21:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.215 21:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.215 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.215 { 00:18:05.215 "cntlid": 145, 00:18:05.215 "qid": 0, 00:18:05.215 "state": "enabled", 00:18:05.215 "thread": "nvmf_tgt_poll_group_000", 00:18:05.215 "listen_address": { 00:18:05.215 "trtype": "TCP", 00:18:05.215 "adrfam": "IPv4", 00:18:05.215 "traddr": "10.0.0.2", 00:18:05.215 "trsvcid": "4420" 00:18:05.215 }, 00:18:05.215 "peer_address": { 00:18:05.215 "trtype": "TCP", 00:18:05.215 "adrfam": "IPv4", 00:18:05.215 "traddr": "10.0.0.1", 00:18:05.215 "trsvcid": "37366" 00:18:05.215 }, 00:18:05.215 "auth": { 00:18:05.215 "state": "completed", 00:18:05.215 "digest": "sha512", 00:18:05.215 "dhgroup": "ffdhe8192" 00:18:05.215 } 00:18:05.215 } 00:18:05.215 ]' 00:18:05.215 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.216 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.216 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.216 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.216 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.216 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.216 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.216 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.474 21:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmZkYzRkM2MxNGY2YWI3YTAzNjk4ZWUzZWJjYTc0ZDEyOTA5ZGFjMzRlZGU3NzFjBE4Utg==: --dhchap-ctrl-secret DHHC-1:03:NzdiYjA1MjE0YTMxMmFlNzM2MDVjNTc4YjA1NzUyYTEyYzYzMTliOGU2ODc0Y2JiNTRmZWM1ZGQ5MGJhZDc1ZEUdIPs=: 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:06.040 21:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:06.606 request: 00:18:06.606 { 00:18:06.606 "name": "nvme0", 00:18:06.606 "trtype": "tcp", 00:18:06.606 "traddr": "10.0.0.2", 00:18:06.606 "adrfam": "ipv4", 00:18:06.606 "trsvcid": "4420", 00:18:06.606 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:06.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:06.606 "prchk_reftag": false, 00:18:06.606 "prchk_guard": false, 00:18:06.606 "hdgst": false, 00:18:06.606 "ddgst": false, 00:18:06.606 "dhchap_key": "key2", 00:18:06.606 "method": "bdev_nvme_attach_controller", 00:18:06.606 "req_id": 1 00:18:06.606 } 00:18:06.606 Got JSON-RPC error response 00:18:06.606 response: 00:18:06.606 { 00:18:06.606 "code": -5, 00:18:06.606 "message": "Input/output error" 00:18:06.606 } 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.606 21:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.863 request: 00:18:06.863 { 00:18:06.863 "name": "nvme0", 00:18:06.863 "trtype": "tcp", 00:18:06.863 "traddr": "10.0.0.2", 00:18:06.863 "adrfam": "ipv4", 00:18:06.863 "trsvcid": "4420", 00:18:06.863 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:06.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:06.863 "prchk_reftag": false, 00:18:06.863 "prchk_guard": false, 00:18:06.863 "hdgst": false, 00:18:06.863 "ddgst": false, 00:18:06.863 "dhchap_key": "key1", 00:18:06.863 "dhchap_ctrlr_key": "ckey2", 00:18:06.863 "method": "bdev_nvme_attach_controller", 00:18:06.863 "req_id": 1 00:18:06.863 } 00:18:06.863 Got JSON-RPC error response 00:18:06.863 response: 00:18:06.863 { 00:18:06.863 "code": -5, 00:18:06.863 "message": "Input/output error" 00:18:06.863 } 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.863 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.127 21:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.386 request: 00:18:07.386 { 00:18:07.386 "name": "nvme0", 00:18:07.386 "trtype": "tcp", 00:18:07.386 "traddr": "10.0.0.2", 00:18:07.386 "adrfam": "ipv4", 00:18:07.386 "trsvcid": "4420", 00:18:07.386 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:07.386 "prchk_reftag": false, 00:18:07.386 "prchk_guard": false, 00:18:07.386 "hdgst": false, 00:18:07.386 "ddgst": false, 00:18:07.386 "dhchap_key": "key1", 00:18:07.386 "dhchap_ctrlr_key": "ckey1", 00:18:07.386 "method": "bdev_nvme_attach_controller", 00:18:07.386 "req_id": 1 00:18:07.386 } 00:18:07.386 Got JSON-RPC error response 00:18:07.386 response: 00:18:07.386 { 00:18:07.386 "code": -5, 00:18:07.386 "message": "Input/output error" 00:18:07.386 } 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3689740 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3689740 ']' 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3689740 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3689740 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3689740' 00:18:07.386 killing process with pid 3689740 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3689740 00:18:07.386 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3689740 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3710653 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3710653 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3710653 ']' 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.644 21:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3710653 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3710653 ']' 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.640 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.898 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.898 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:08.898 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.899 21:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.466 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.466 { 00:18:09.466 
"cntlid": 1, 00:18:09.466 "qid": 0, 00:18:09.466 "state": "enabled", 00:18:09.466 "thread": "nvmf_tgt_poll_group_000", 00:18:09.466 "listen_address": { 00:18:09.466 "trtype": "TCP", 00:18:09.466 "adrfam": "IPv4", 00:18:09.466 "traddr": "10.0.0.2", 00:18:09.466 "trsvcid": "4420" 00:18:09.466 }, 00:18:09.466 "peer_address": { 00:18:09.466 "trtype": "TCP", 00:18:09.466 "adrfam": "IPv4", 00:18:09.466 "traddr": "10.0.0.1", 00:18:09.466 "trsvcid": "37424" 00:18:09.466 }, 00:18:09.466 "auth": { 00:18:09.466 "state": "completed", 00:18:09.466 "digest": "sha512", 00:18:09.466 "dhgroup": "ffdhe8192" 00:18:09.466 } 00:18:09.466 } 00:18:09.466 ]' 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.466 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.724 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.724 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.724 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.724 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.724 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.724 21:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGNlZWRlM2I2MzlkYjQ2YjgzYzYwYzRmY2MxY2VjMDdkNTY0MjNlODRiNjM5MTc4MjM2NTRjODAzMmRmMzJmNC9t+ck=: 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:10.291 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:10.550 21:56:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.550 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:10.550 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.550 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:10.550 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.550 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:10.550 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.550 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.550 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.809 request: 00:18:10.809 { 00:18:10.809 "name": "nvme0", 00:18:10.809 "trtype": "tcp", 00:18:10.809 "traddr": "10.0.0.2", 00:18:10.809 "adrfam": "ipv4", 00:18:10.809 "trsvcid": "4420", 00:18:10.809 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:10.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:10.809 "prchk_reftag": false, 00:18:10.809 "prchk_guard": false, 00:18:10.809 "hdgst": false, 00:18:10.809 "ddgst": false, 00:18:10.809 "dhchap_key": "key3", 00:18:10.809 "method": "bdev_nvme_attach_controller", 00:18:10.809 "req_id": 1 00:18:10.809 } 00:18:10.809 Got JSON-RPC error response 00:18:10.809 response: 00:18:10.809 { 00:18:10.809 "code": -5, 00:18:10.809 "message": "Input/output error" 00:18:10.809 } 00:18:10.809 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:10.809 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:10.809 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:10.809 21:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:10.809 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:10.809 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:10.809 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:10.809 21:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.068 request: 00:18:11.068 { 00:18:11.068 "name": "nvme0", 00:18:11.068 "trtype": "tcp", 00:18:11.068 "traddr": "10.0.0.2", 00:18:11.068 "adrfam": "ipv4", 00:18:11.068 "trsvcid": "4420", 00:18:11.068 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:11.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:11.068 "prchk_reftag": false, 00:18:11.068 "prchk_guard": false, 00:18:11.068 "hdgst": false, 00:18:11.068 "ddgst": false, 00:18:11.068 "dhchap_key": "key3", 00:18:11.068 "method": "bdev_nvme_attach_controller", 00:18:11.068 "req_id": 1 00:18:11.068 } 00:18:11.068 Got JSON-RPC error response 00:18:11.068 response: 00:18:11.068 { 00:18:11.068 "code": -5, 00:18:11.068 "message": "Input/output error" 00:18:11.068 } 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:11.068 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:11.327 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:11.586 request: 00:18:11.586 { 00:18:11.586 "name": "nvme0", 00:18:11.586 "trtype": "tcp", 00:18:11.586 "traddr": "10.0.0.2", 00:18:11.586 "adrfam": "ipv4", 00:18:11.586 "trsvcid": "4420", 00:18:11.586 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:11.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:11.586 "prchk_reftag": false, 00:18:11.586 "prchk_guard": false, 00:18:11.586 "hdgst": false, 00:18:11.586 "ddgst": false, 00:18:11.586 
"dhchap_key": "key0", 00:18:11.586 "dhchap_ctrlr_key": "key1", 00:18:11.586 "method": "bdev_nvme_attach_controller", 00:18:11.586 "req_id": 1 00:18:11.586 } 00:18:11.586 Got JSON-RPC error response 00:18:11.586 response: 00:18:11.586 { 00:18:11.586 "code": -5, 00:18:11.586 "message": "Input/output error" 00:18:11.586 } 00:18:11.586 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:11.586 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:11.586 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:11.586 21:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:11.586 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:11.586 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:11.586 00:18:11.845 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:11.845 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:11.845 21:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.845 21:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.845 21:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.845 21:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3689973 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3689973 ']' 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3689973 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3689973 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3689973' 00:18:12.104 killing process with pid 3689973 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3689973 00:18:12.104 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3689973 
00:18:12.363 21:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:12.363 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.363 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:12.363 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.363 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:12.363 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.363 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.363 rmmod nvme_tcp 00:18:12.363 rmmod nvme_fabrics 00:18:12.363 rmmod nvme_keyring 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3710653 ']' 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3710653 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3710653 ']' 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3710653 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3710653 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3710653' 00:18:12.653 killing process with pid 3710653 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3710653 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3710653 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.653 21:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.191 21:56:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.191 21:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wuc /tmp/spdk.key-sha256.8Cf /tmp/spdk.key-sha384.I5w /tmp/spdk.key-sha512.wEI /tmp/spdk.key-sha512.6OB /tmp/spdk.key-sha384.edi /tmp/spdk.key-sha256.h5L '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:15.191 00:18:15.191 real 2m10.438s 00:18:15.191 user 5m0.005s 00:18:15.191 sys 0m20.368s 00:18:15.191 21:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:15.191 21:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.191 ************************************ 00:18:15.191 END TEST nvmf_auth_target 00:18:15.191 ************************************ 00:18:15.191 21:56:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:15.191 21:56:08 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:15.191 21:56:08 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:15.191 21:56:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:15.191 21:56:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.191 21:56:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:15.191 ************************************ 00:18:15.191 START TEST nvmf_bdevio_no_huge 00:18:15.191 ************************************ 00:18:15.191 21:56:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:15.191 * Looking for test storage... 00:18:15.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
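The bdevio run starts by sourcing nvmf/common.sh, which, as the trace shows, generates the host identity once with nvme gen-hostnqn and reuses the NQN's UUID suffix as the host ID. Roughly as follows; the suffix extraction shown here is an assumption, since only the resulting values appear in the trace:

    # Derive the host identity pair reused by every `nvme connect` in this job.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the <uuid> part
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")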
00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.191 21:56:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.191 21:56:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:20.487 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:20.488 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:20.488 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:20.488 
21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:20.488 Found net devices under 0000:86:00.0: cvl_0_0 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:20.488 Found net devices under 0000:86:00.1: cvl_0_1 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.488 21:56:14 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:20.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:18:20.488 00:18:20.488 --- 10.0.0.2 ping statistics --- 00:18:20.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.488 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:20.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:18:20.488 00:18:20.488 --- 10.0.0.1 ping statistics --- 00:18:20.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.488 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:20.488 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.748 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3715000 00:18:20.748 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:20.748 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3715000 00:18:20.748 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3715000 ']' 00:18:20.748 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.748 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.748 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.748 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.748 21:56:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.748 [2024-07-15 21:56:14.779628] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:18:20.748 [2024-07-15 21:56:14.779672] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:20.748 [2024-07-15 21:56:14.843418] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.748 [2024-07-15 21:56:14.928842] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.748 [2024-07-15 21:56:14.928881] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.748 [2024-07-15 21:56:14.928887] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.748 [2024-07-15 21:56:14.928893] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.748 [2024-07-15 21:56:14.928898] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
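The prologue above discovered the two E810 functions (device ID 0x159b), mapped them to cvl_0_0/cvl_0_1 under sysfs, moved cvl_0_0 into a fresh network namespace, and opened TCP/4420 before launching the target there without hugepages; with --no-huge, the -s 1024 argument sizes a 1024 MB pool of ordinary anonymous memory in place of hugepage-backed DPDK memory. Condensed from the trace (run as root; error handling omitted):

    # Namespace plumbing condensed from nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Launch the target inside the namespace without hugepages.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &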
00:18:20.748 [2024-07-15 21:56:14.929015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:20.748 [2024-07-15 21:56:14.929121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:20.748 [2024-07-15 21:56:14.929233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.748 [2024-07-15 21:56:14.929247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:21.684 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.684 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:21.684 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:21.684 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:21.684 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.684 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.684 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:21.684 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.684 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.685 [2024-07-15 21:56:15.629693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.685 Malloc0 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.685 [2024-07-15 21:56:15.673992] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:21.685 { 00:18:21.685 "params": { 00:18:21.685 "name": "Nvme$subsystem", 00:18:21.685 "trtype": "$TEST_TRANSPORT", 00:18:21.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:21.685 "adrfam": "ipv4", 00:18:21.685 "trsvcid": "$NVMF_PORT", 00:18:21.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:21.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:21.685 "hdgst": ${hdgst:-false}, 00:18:21.685 "ddgst": ${ddgst:-false} 00:18:21.685 }, 00:18:21.685 "method": "bdev_nvme_attach_controller" 00:18:21.685 } 00:18:21.685 EOF 00:18:21.685 )") 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:21.685 21:56:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:21.685 "params": { 00:18:21.685 "name": "Nvme1", 00:18:21.685 "trtype": "tcp", 00:18:21.685 "traddr": "10.0.0.2", 00:18:21.685 "adrfam": "ipv4", 00:18:21.685 "trsvcid": "4420", 00:18:21.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.685 "hdgst": false, 00:18:21.685 "ddgst": false 00:18:21.685 }, 00:18:21.685 "method": "bdev_nvme_attach_controller" 00:18:21.685 }' 00:18:21.685 [2024-07-15 21:56:15.723477] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
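Note that bdevio never talks to a live RPC server here: gen_nvmf_target_json prints a bdev_nvme_attach_controller entry (expanded by the printf above) and the harness hands it to bdevio as --json /dev/fd/62 via process substitution, so the Nvme1n1 bdev is created from static config at startup. A sketch of the same wiring; the outer "subsystems" wrapper is reconstructed from SPDK's JSON config format, since only the inner entry is visible in this trace:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Emit a static bdev config equivalent to the printf output above.
    gen_json() {
        printf '%s\n' '{
          "subsystems": [{ "subsystem": "bdev", "config": [{
            "params": {
              "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }] }]
        }'
    }
    # Process substitution supplies the /dev/fd/NN path seen in the trace.
    "$SPDK_DIR/test/bdev/bdevio/bdevio" --json <(gen_json) --no-huge -s 1024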
00:18:21.685 [2024-07-15 21:56:15.723522] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3715041 ] 00:18:21.685 [2024-07-15 21:56:15.782005] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.685 [2024-07-15 21:56:15.868443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.685 [2024-07-15 21:56:15.868538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.685 [2024-07-15 21:56:15.868541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.944 I/O targets: 00:18:21.944 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:21.944 00:18:21.944 00:18:21.944 CUnit - A unit testing framework for C - Version 2.1-3 00:18:21.944 http://cunit.sourceforge.net/ 00:18:21.944 00:18:21.944 00:18:21.944 Suite: bdevio tests on: Nvme1n1 00:18:21.944 Test: blockdev write read block ...passed 00:18:21.944 Test: blockdev write zeroes read block ...passed 00:18:21.944 Test: blockdev write zeroes read no split ...passed 00:18:22.204 Test: blockdev write zeroes read split ...passed 00:18:22.204 Test: blockdev write zeroes read split partial ...passed 00:18:22.204 Test: blockdev reset ...[2024-07-15 21:56:16.263650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:22.204 [2024-07-15 21:56:16.263707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb74f0 (9): Bad file descriptor 00:18:22.204 [2024-07-15 21:56:16.322231] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:22.204 passed
00:18:22.204 Test: blockdev write read 8 blocks ...passed
00:18:22.204 Test: blockdev write read size > 128k ...passed
00:18:22.204 Test: blockdev write read invalid size ...passed
00:18:22.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:22.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:22.204 Test: blockdev write read max offset ...passed
00:18:22.463 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:22.463 Test: blockdev writev readv 8 blocks ...passed
00:18:22.463 Test: blockdev writev readv 30 x 1block ...passed
00:18:22.463 Test: blockdev writev readv block ...passed
00:18:22.463 Test: blockdev writev readv size > 128k ...passed
00:18:22.463 Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:22.463 Test: blockdev comparev and writev ...[2024-07-15 21:56:16.537760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:22.463 [2024-07-15 21:56:16.537787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.537800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:22.463 [2024-07-15 21:56:16.537808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.538091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:22.463 [2024-07-15 21:56:16.538102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.538114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:22.463 [2024-07-15 21:56:16.538121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.538405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:22.463 [2024-07-15 21:56:16.538416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.538427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:22.463 [2024-07-15 21:56:16.538434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.538715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:22.463 [2024-07-15 21:56:16.538727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.538739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:22.463 [2024-07-15 21:56:16.538751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:18:22.463 passed
00:18:22.463 Test: blockdev nvme passthru rw ...passed
00:18:22.463 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:56:16.620642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:22.463 [2024-07-15 21:56:16.620658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.620811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:22.463 [2024-07-15 21:56:16.620821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.620968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:22.463 [2024-07-15 21:56:16.620979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:18:22.463 [2024-07-15 21:56:16.621128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:22.464 [2024-07-15 21:56:16.621138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:18:22.464 passed
00:18:22.464 Test: blockdev nvme admin passthru ...passed
00:18:22.464 Test: blockdev copy ...passed
00:18:22.464
00:18:22.464 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:22.464               suites      1      1    n/a      0        0
00:18:22.464                tests     23     23     23      0        0
00:18:22.464              asserts    152    152    152      0      n/a
00:18:22.464
00:18:22.464 Elapsed time =    1.247 seconds
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:22.723 21:56:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:22.981 rmmod nvme_tcp
00:18:22.981 rmmod nvme_fabrics
00:18:22.981 rmmod nvme_keyring
00:18:22.981 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:22.981 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e
00:18:22.981 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0
00:18:22.981 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3715000 ']'
00:18:22.982 21:56:17
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3715000 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3715000 ']' 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3715000 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3715000 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3715000' 00:18:22.982 killing process with pid 3715000 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3715000 00:18:22.982 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3715000 00:18:23.240 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.240 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.240 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:23.240 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.240 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.240 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.240 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.240 21:56:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.778 21:56:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:25.778 00:18:25.778 real 0m10.458s 00:18:25.778 user 0m13.322s 00:18:25.778 sys 0m5.122s 00:18:25.778 21:56:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:25.778 21:56:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:25.778 ************************************ 00:18:25.778 END TEST nvmf_bdevio_no_huge 00:18:25.778 ************************************ 00:18:25.778 21:56:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:25.778 21:56:19 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:25.778 21:56:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:25.778 21:56:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.778 21:56:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.778 ************************************ 00:18:25.778 START TEST nvmf_tls 00:18:25.778 ************************************ 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:25.778 * Looking for test storage... 
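START TEST nvmf_tls opens the TLS suite: it generates interchange-format PSKs, stands up a TLS-enabled NVMe/TCP listener, then drives one successful and several deliberately failing connect attempts. For orientation, a condensed sketch of the target-side setup, mirroring the rpc.py calls recorded further down in this log (the long /var/jenkins/... path is shortened to rpc.py; the target is started with --wait-for-rpc, which is why the socket options can be set before framework_start_init):

    rpc.py sock_set_default_impl -i ssl                 # use the ssl socket impl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                   # -k: TLS-secured listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EuqvBQI1zY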
00:18:25.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:25.778 21:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.092 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.093 
21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:31.093 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:31.093 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:31.093 Found net devices under 0000:86:00.0: cvl_0_0 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:31.093 Found net devices under 0000:86:00.1: cvl_0_1 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:18:31.093 00:18:31.093 --- 10.0.0.2 ping statistics --- 00:18:31.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.093 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:18:31.093 00:18:31.093 --- 10.0.0.1 ping statistics --- 00:18:31.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.093 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3718783 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3718783 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3718783 ']' 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.093 21:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.093 [2024-07-15 21:56:25.034295] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:18:31.093 [2024-07-15 21:56:25.034337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.093 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.093 [2024-07-15 21:56:25.092346] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.093 [2024-07-15 21:56:25.170924] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.093 [2024-07-15 21:56:25.170958] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
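The interface plumbing above is the backbone of every test that follows: one e810 port (cvl_0_0) is moved into a private network namespace to host the target at 10.0.0.2, while its peer port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and the cross-namespace pings confirm the path. A recap of the sequence just executed:

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1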
00:18:31.094 [2024-07-15 21:56:25.170966] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.094 [2024-07-15 21:56:25.170972] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.094 [2024-07-15 21:56:25.170978] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.094 [2024-07-15 21:56:25.170995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.661 21:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.661 21:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:31.661 21:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.661 21:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.661 21:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.661 21:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.661 21:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:31.661 21:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:31.920 true 00:18:31.920 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:31.920 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:32.179 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:32.179 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:32.179 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:32.179 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:32.179 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:32.437 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:32.437 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:32.437 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:32.696 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:32.696 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:32.696 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:32.696 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:32.696 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:32.696 21:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:32.955 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:32.955 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:32.955 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:33.214 21:56:27 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.214 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:33.214 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:33.214 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:33.214 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:33.473 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.473 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.EuqvBQI1zY 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.aprbZcxj3Z 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.EuqvBQI1zY 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aprbZcxj3Z 00:18:33.732 21:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:33.991 21:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:34.249 21:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.EuqvBQI1zY 00:18:34.249 21:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.EuqvBQI1zY 00:18:34.249 21:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:34.249 [2024-07-15 21:56:28.395096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.249 21:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:34.508 21:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.508 [2024-07-15 21:56:28.731959] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.508 [2024-07-15 21:56:28.732132] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.508 21:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:34.765 malloc0 00:18:34.765 21:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.023 21:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EuqvBQI1zY 00:18:35.023 [2024-07-15 21:56:29.233449] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:35.023 21:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.EuqvBQI1zY 00:18:35.281 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.251 Initializing NVMe Controllers 00:18:45.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:45.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:45.251 Initialization complete. Launching workers. 
00:18:45.251 ======================================================== 00:18:45.251 Latency(us) 00:18:45.251 Device Information : IOPS MiB/s Average min max 00:18:45.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16411.46 64.11 3900.13 802.21 6042.07 00:18:45.251 ======================================================== 00:18:45.251 Total : 16411.46 64.11 3900.13 802.21 6042.07 00:18:45.251 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EuqvBQI1zY 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EuqvBQI1zY' 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3721137 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3721137 /var/tmp/bdevperf.sock 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3721137 ']' 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.251 21:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.251 [2024-07-15 21:56:39.398470] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
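Worth pausing on the two PSKs generated earlier: format_interchange_psk emits the TP 8018 interchange form, NVMeTLSkey-1:<hh>:<base64>:, where <hh> is the hash indicator and the base64 payload is the configured PSK bytes with a CRC-32 appended. A hedged re-implementation in the same python-on-stdin style the harness itself uses; the little-endian CRC placement is inferred from the logged output, not confirmed against the spec here:

    python3 - NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 <<'EOF'
    import base64, sys, zlib
    prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # byte order assumed
    print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
    EOF

If those assumptions hold, this prints the same NVMeTLSkey-1:01:MDAx...JEiQ: value that was written to /tmp/tmp.EuqvBQI1zY and chmod 0600 above.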
00:18:45.251 [2024-07-15 21:56:39.398519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721137 ] 00:18:45.251 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.251 [2024-07-15 21:56:39.449025] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.509 [2024-07-15 21:56:39.527930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.078 21:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.078 21:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:46.078 21:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EuqvBQI1zY 00:18:46.337 [2024-07-15 21:56:40.355000] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.337 [2024-07-15 21:56:40.355073] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:46.337 TLSTESTn1 00:18:46.337 21:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:46.337 Running I/O for 10 seconds... 00:18:58.533 00:18:58.533 Latency(us) 00:18:58.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.533 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:58.533 Verification LBA range: start 0x0 length 0x2000 00:18:58.533 TLSTESTn1 : 10.02 5517.44 21.55 0.00 0.00 23160.88 4786.98 45362.31 00:18:58.533 =================================================================================================================== 00:18:58.533 Total : 5517.44 21.55 0.00 0.00 23160.88 4786.98 45362.31 00:18:58.533 0 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3721137 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3721137 ']' 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3721137 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3721137 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3721137' 00:18:58.533 killing process with pid 3721137 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3721137 00:18:58.533 Received shutdown signal, test time was about 10.000000 seconds 00:18:58.533 00:18:58.533 Latency(us) 00:18:58.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:58.533 =================================================================================================================== 00:18:58.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.533 [2024-07-15 21:56:50.636528] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3721137 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aprbZcxj3Z 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aprbZcxj3Z 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aprbZcxj3Z 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aprbZcxj3Z' 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3722974 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3722974 /var/tmp/bdevperf.sock 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3722974 ']' 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.533 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.534 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.534 21:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.534 [2024-07-15 21:56:50.866870] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:18:58.534 [2024-07-15 21:56:50.866917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722974 ] 00:18:58.534 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.534 [2024-07-15 21:56:50.916807] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.534 [2024-07-15 21:56:50.988274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aprbZcxj3Z 00:18:58.534 [2024-07-15 21:56:51.827096] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.534 [2024-07-15 21:56:51.827175] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:58.534 [2024-07-15 21:56:51.832034] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:58.534 [2024-07-15 21:56:51.832396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2288680 (107): Transport endpoint is not connected 00:18:58.534 [2024-07-15 21:56:51.833386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2288680 (9): Bad file descriptor 00:18:58.534 [2024-07-15 21:56:51.834387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:58.534 [2024-07-15 21:56:51.834397] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:58.534 [2024-07-15 21:56:51.834405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
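This is the first negative case: the initiator attaches with the second key, /tmp/tmp.aprbZcxj3Z, which was never registered for host1, so the TLS handshake cannot succeed, the socket is torn down (errno 107 and the Bad file descriptor above), and the expected -5 Input/output error comes back over JSON-RPC below. The attach call as issued, for reference:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.aprbZcxj3Z    # key does not match host1's registered PSK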
00:18:58.534 request: 00:18:58.534 { 00:18:58.534 "name": "TLSTEST", 00:18:58.534 "trtype": "tcp", 00:18:58.534 "traddr": "10.0.0.2", 00:18:58.534 "adrfam": "ipv4", 00:18:58.534 "trsvcid": "4420", 00:18:58.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.534 "prchk_reftag": false, 00:18:58.534 "prchk_guard": false, 00:18:58.534 "hdgst": false, 00:18:58.534 "ddgst": false, 00:18:58.534 "psk": "/tmp/tmp.aprbZcxj3Z", 00:18:58.534 "method": "bdev_nvme_attach_controller", 00:18:58.534 "req_id": 1 00:18:58.534 } 00:18:58.534 Got JSON-RPC error response 00:18:58.534 response: 00:18:58.534 { 00:18:58.534 "code": -5, 00:18:58.534 "message": "Input/output error" 00:18:58.534 } 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3722974 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3722974 ']' 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3722974 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3722974 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3722974' 00:18:58.534 killing process with pid 3722974 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3722974 00:18:58.534 Received shutdown signal, test time was about 10.000000 seconds 00:18:58.534 00:18:58.534 Latency(us) 00:18:58.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.534 =================================================================================================================== 00:18:58.534 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:58.534 [2024-07-15 21:56:51.896962] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:58.534 21:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3722974 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EuqvBQI1zY 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EuqvBQI1zY 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EuqvBQI1zY 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EuqvBQI1zY' 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3723209 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3723209 /var/tmp/bdevperf.sock 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3723209 ']' 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.534 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.534 [2024-07-15 21:56:52.117557] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:18:58.534 [2024-07-15 21:56:52.117604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723209 ] 00:18:58.534 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.534 [2024-07-15 21:56:52.167543] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.534 [2024-07-15 21:56:52.238541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.793 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.793 21:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:58.793 21:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.EuqvBQI1zY 00:18:59.052 [2024-07-15 21:56:53.072240] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.052 [2024-07-15 21:56:53.072323] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:59.052 [2024-07-15 21:56:53.076866] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:59.052 [2024-07-15 21:56:53.076886] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:59.052 [2024-07-15 21:56:53.076910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:59.052 [2024-07-15 21:56:53.077578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x600680 (107): Transport endpoint is not connected 00:18:59.052 [2024-07-15 21:56:53.078567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x600680 (9): Bad file descriptor 00:18:59.052 [2024-07-15 21:56:53.079569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:59.052 [2024-07-15 21:56:53.079578] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:59.052 [2024-07-15 21:56:53.079587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
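Second negative case: the key is the right one, but the identity is not. The target resolves PSKs by the TLS identity string visible in the errors above, NVMe0R01 <hostnqn> <subnqn>, and host2 was never registered against cnode1, so the lookup fails and the connect is rejected. A hedged sketch of the registration that would have made this attempt succeed (not part of the test):

    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.EuqvBQI1zY
    # would make "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
    # resolve to a PSK on the target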
00:18:59.052 request: 00:18:59.052 { 00:18:59.052 "name": "TLSTEST", 00:18:59.052 "trtype": "tcp", 00:18:59.052 "traddr": "10.0.0.2", 00:18:59.052 "adrfam": "ipv4", 00:18:59.052 "trsvcid": "4420", 00:18:59.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.052 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:59.052 "prchk_reftag": false, 00:18:59.052 "prchk_guard": false, 00:18:59.052 "hdgst": false, 00:18:59.052 "ddgst": false, 00:18:59.052 "psk": "/tmp/tmp.EuqvBQI1zY", 00:18:59.052 "method": "bdev_nvme_attach_controller", 00:18:59.052 "req_id": 1 00:18:59.052 } 00:18:59.052 Got JSON-RPC error response 00:18:59.052 response: 00:18:59.052 { 00:18:59.052 "code": -5, 00:18:59.052 "message": "Input/output error" 00:18:59.052 } 00:18:59.052 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3723209 00:18:59.052 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3723209 ']' 00:18:59.052 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3723209 00:18:59.052 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:59.052 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.052 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723209 00:18:59.052 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:59.053 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:59.053 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723209' 00:18:59.053 killing process with pid 3723209 00:18:59.053 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3723209 00:18:59.053 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.053 00:18:59.053 Latency(us) 00:18:59.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.053 =================================================================================================================== 00:18:59.053 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.053 [2024-07-15 21:56:53.141844] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:59.053 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3723209 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EuqvBQI1zY 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EuqvBQI1zY 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EuqvBQI1zY 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EuqvBQI1zY' 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3723443 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3723443 /var/tmp/bdevperf.sock 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3723443 ']' 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.312 21:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.312 [2024-07-15 21:56:53.365726] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:18:59.312 [2024-07-15 21:56:53.365773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723443 ] 00:18:59.312 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.312 [2024-07-15 21:56:53.416722] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.312 [2024-07-15 21:56:53.490823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EuqvBQI1zY 00:19:00.275 [2024-07-15 21:56:54.325811] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.275 [2024-07-15 21:56:54.325899] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:00.275 [2024-07-15 21:56:54.330541] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:00.275 [2024-07-15 21:56:54.330561] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:00.275 [2024-07-15 21:56:54.330584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:00.275 [2024-07-15 21:56:54.331248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1e680 (107): Transport endpoint is not connected 00:19:00.275 [2024-07-15 21:56:54.332238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1e680 (9): Bad file descriptor 00:19:00.275 [2024-07-15 21:56:54.333239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:00.275 [2024-07-15 21:56:54.333248] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:00.275 [2024-07-15 21:56:54.333256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:00.275 request: 00:19:00.275 { 00:19:00.275 "name": "TLSTEST", 00:19:00.275 "trtype": "tcp", 00:19:00.275 "traddr": "10.0.0.2", 00:19:00.275 "adrfam": "ipv4", 00:19:00.275 "trsvcid": "4420", 00:19:00.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:00.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.275 "prchk_reftag": false, 00:19:00.275 "prchk_guard": false, 00:19:00.275 "hdgst": false, 00:19:00.275 "ddgst": false, 00:19:00.275 "psk": "/tmp/tmp.EuqvBQI1zY", 00:19:00.275 "method": "bdev_nvme_attach_controller", 00:19:00.275 "req_id": 1 00:19:00.275 } 00:19:00.275 Got JSON-RPC error response 00:19:00.275 response: 00:19:00.275 { 00:19:00.275 "code": -5, 00:19:00.275 "message": "Input/output error" 00:19:00.275 } 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3723443 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3723443 ']' 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3723443 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723443 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723443' 00:19:00.275 killing process with pid 3723443 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3723443 00:19:00.275 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.275 00:19:00.275 Latency(us) 00:19:00.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.275 =================================================================================================================== 00:19:00.275 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.275 [2024-07-15 21:56:54.379464] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:00.275 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3723443 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3723690 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3723690 /var/tmp/bdevperf.sock 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3723690 ']' 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.535 21:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.535 [2024-07-15 21:56:54.600410] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
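The @155 case then drops the key entirely (psk=''): the same attach against the same TLS-only listener, but with no --psk argument. Without a key the TLS handshake cannot complete, so the socket read fails with errno 107 ("Transport endpoint is not connected") before the controller ever initializes, as the output below shows. The attach reduces to (paths abbreviated):

    # no --psk at all; against a listener that was created with -k this must fail
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1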
00:19:00.535 [2024-07-15 21:56:54.600457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723690 ] 00:19:00.535 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.535 [2024-07-15 21:56:54.651507] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.535 [2024-07-15 21:56:54.718690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:01.470 [2024-07-15 21:56:55.571486] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:01.470 [2024-07-15 21:56:55.573154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a4c70 (9): Bad file descriptor 00:19:01.470 [2024-07-15 21:56:55.574152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:01.470 [2024-07-15 21:56:55.574165] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:01.470 [2024-07-15 21:56:55.574175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:01.470 request: 00:19:01.470 { 00:19:01.470 "name": "TLSTEST", 00:19:01.470 "trtype": "tcp", 00:19:01.470 "traddr": "10.0.0.2", 00:19:01.470 "adrfam": "ipv4", 00:19:01.470 "trsvcid": "4420", 00:19:01.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.470 "prchk_reftag": false, 00:19:01.470 "prchk_guard": false, 00:19:01.470 "hdgst": false, 00:19:01.470 "ddgst": false, 00:19:01.470 "method": "bdev_nvme_attach_controller", 00:19:01.470 "req_id": 1 00:19:01.470 } 00:19:01.470 Got JSON-RPC error response 00:19:01.470 response: 00:19:01.470 { 00:19:01.470 "code": -5, 00:19:01.470 "message": "Input/output error" 00:19:01.470 } 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3723690 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3723690 ']' 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3723690 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723690 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723690' 00:19:01.470 killing process with pid 3723690 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3723690 00:19:01.470 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.470 00:19:01.470 Latency(us) 00:19:01.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.470 =================================================================================================================== 00:19:01.470 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.470 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3723690 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3718783 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3718783 ']' 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3718783 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3718783 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3718783' 00:19:01.729 
killing process with pid 3718783 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3718783 00:19:01.729 [2024-07-15 21:56:55.853805] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:01.729 21:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3718783 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.V9RK6Bs3Ls 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.V9RK6Bs3Ls 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3723937 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3723937 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3723937 ']' 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.988 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.988 [2024-07-15 21:56:56.146211] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
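For reference, the key_long value generated above follows the NVMe TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash identifier, then base64 of the configured key bytes with a little-endian CRC-32 appended, and a trailing colon. The body of the format_key helper is elided from the trace apart from its "python -" heredoc, so the following is a minimal sketch that reproduces the value shown:

    # minimal sketch of the interchange-key formatting; the real helper lives in nvmf/common.sh
    format_interchange_psk() {
        local key=$1 digest=$2
        python3 -c 'import base64, sys, zlib; key = sys.argv[1].encode(); crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' "$key" "$digest"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The key is then written to a mktemp file and chmod 0600, since SPDK rejects PSK files with looser permissions (a behavior exercised deliberately at @170 below).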
00:19:01.988 [2024-07-15 21:56:56.146267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.988 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.988 [2024-07-15 21:56:56.203464] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.247 [2024-07-15 21:56:56.281688] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.247 [2024-07-15 21:56:56.281736] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.247 [2024-07-15 21:56:56.281742] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.247 [2024-07-15 21:56:56.281749] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.247 [2024-07-15 21:56:56.281754] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.247 [2024-07-15 21:56:56.281773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.815 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.815 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:02.815 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:02.815 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:02.815 21:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.815 21:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.815 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.V9RK6Bs3Ls 00:19:02.815 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.V9RK6Bs3Ls 00:19:02.815 21:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:03.074 [2024-07-15 21:56:57.141397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.074 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.332 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.332 [2024-07-15 21:56:57.482271] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.332 [2024-07-15 21:56:57.482450] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.332 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.590 malloc0 00:19:03.590 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.V9RK6Bs3Ls 00:19:03.849 [2024-07-15 21:56:57.979692] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V9RK6Bs3Ls 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.V9RK6Bs3Ls' 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3724221 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3724221 /var/tmp/bdevperf.sock 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3724221 ']' 00:19:03.849 21:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.849 21:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:03.849 21:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.849 21:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:03.849 21:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.849 [2024-07-15 21:56:58.044023] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
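Condensed, the target-side setup that setup_nvmf_tgt performs at @165 is six RPCs; -k on the listener is what enables TLS on the port, and --psk binds the allowed host to the key file (commands as in this run, spdk tree paths abbreviated):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V9RK6Bs3Ls

This time subnqn, hostnqn and key file all match, so the attach succeeds and the I/O phase is driven over the same bdevperf socket:

    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests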
00:19:03.849 [2024-07-15 21:56:58.044074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724221 ] 00:19:03.849 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.108 [2024-07-15 21:56:58.095717] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.109 [2024-07-15 21:56:58.168919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.676 21:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.676 21:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:04.676 21:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V9RK6Bs3Ls 00:19:04.935 [2024-07-15 21:56:58.995370] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.935 [2024-07-15 21:56:58.995442] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:04.935 TLSTESTn1 00:19:04.935 21:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:04.935 Running I/O for 10 seconds... 00:19:17.137 00:19:17.137 Latency(us) 00:19:17.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.137 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.137 Verification LBA range: start 0x0 length 0x2000 00:19:17.137 TLSTESTn1 : 10.02 4976.52 19.44 0.00 0.00 25674.67 5784.26 65194.07 00:19:17.137 =================================================================================================================== 00:19:17.137 Total : 4976.52 19.44 0.00 0.00 25674.67 5784.26 65194.07 00:19:17.137 0 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3724221 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3724221 ']' 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3724221 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3724221 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3724221' 00:19:17.137 killing process with pid 3724221 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3724221 00:19:17.137 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.137 00:19:17.137 Latency(us) 00:19:17.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:17.137 =================================================================================================================== 00:19:17.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.137 [2024-07-15 21:57:09.272495] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3724221 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.V9RK6Bs3Ls 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V9RK6Bs3Ls 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V9RK6Bs3Ls 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V9RK6Bs3Ls 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.V9RK6Bs3Ls' 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3726593 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.137 21:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3726593 /var/tmp/bdevperf.sock 00:19:17.138 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3726593 ']' 00:19:17.138 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.138 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.138 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.138 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.138 21:57:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.138 [2024-07-15 21:57:09.508040] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
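The @170/@171 case flips the key file to mode 0666 before re-attaching. SPDK refuses to load a PSK file that others can read, so this attach fails synchronously in bdev_nvme_load_psk with "Operation not permitted" rather than with a handshake I/O error, as the response below shows; the target side applies the same check in nvmf_subsystem_add_host at @177 further down, failing with -32603 "Internal error". In outline:

    chmod 0666 /tmp/tmp.V9RK6Bs3Ls   # deliberately too permissive
    # bdev_nvme_load_psk rejects the file, so bdev_nvme_attach_controller is
    # expected to return -1 "Operation not permitted" (JSON-RPC response below)
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.V9RK6Bs3Ls
    chmod 0600 /tmp/tmp.V9RK6Bs3Ls   # restored at @181 before the next positive case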
00:19:17.138 [2024-07-15 21:57:09.508090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726593 ] 00:19:17.138 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.138 [2024-07-15 21:57:09.561694] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.138 [2024-07-15 21:57:09.634744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V9RK6Bs3Ls 00:19:17.138 [2024-07-15 21:57:10.477236] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.138 [2024-07-15 21:57:10.477293] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:17.138 [2024-07-15 21:57:10.477317] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.V9RK6Bs3Ls 00:19:17.138 request: 00:19:17.138 { 00:19:17.138 "name": "TLSTEST", 00:19:17.138 "trtype": "tcp", 00:19:17.138 "traddr": "10.0.0.2", 00:19:17.138 "adrfam": "ipv4", 00:19:17.138 "trsvcid": "4420", 00:19:17.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.138 "prchk_reftag": false, 00:19:17.138 "prchk_guard": false, 00:19:17.138 "hdgst": false, 00:19:17.138 "ddgst": false, 00:19:17.138 "psk": "/tmp/tmp.V9RK6Bs3Ls", 00:19:17.138 "method": "bdev_nvme_attach_controller", 00:19:17.138 "req_id": 1 00:19:17.138 } 00:19:17.138 Got JSON-RPC error response 00:19:17.138 response: 00:19:17.138 { 00:19:17.138 "code": -1, 00:19:17.138 "message": "Operation not permitted" 00:19:17.138 } 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3726593 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3726593 ']' 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3726593 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3726593 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3726593' 00:19:17.138 killing process with pid 3726593 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3726593 00:19:17.138 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.138 00:19:17.138 Latency(us) 00:19:17.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.138 
=================================================================================================================== 00:19:17.138 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3726593 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3723937 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3723937 ']' 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3723937 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723937 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723937' 00:19:17.138 killing process with pid 3723937 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3723937 00:19:17.138 [2024-07-15 21:57:10.761025] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3723937 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3727015 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3727015 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3727015 ']' 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.138 21:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.138 [2024-07-15 21:57:11.006078] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:17.138 [2024-07-15 21:57:11.006128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.138 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.138 [2024-07-15 21:57:11.061631] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.138 [2024-07-15 21:57:11.139903] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.138 [2024-07-15 21:57:11.139938] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.138 [2024-07-15 21:57:11.139945] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.138 [2024-07-15 21:57:11.139951] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.138 [2024-07-15 21:57:11.139957] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.138 [2024-07-15 21:57:11.139974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.V9RK6Bs3Ls 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.V9RK6Bs3Ls 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.V9RK6Bs3Ls 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.V9RK6Bs3Ls 00:19:17.707 21:57:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:17.966 [2024-07-15 21:57:11.995509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.966 21:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:17.966 
21:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:18.224 [2024-07-15 21:57:12.336392] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.224 [2024-07-15 21:57:12.336563] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.224 21:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:18.482 malloc0 00:19:18.482 21:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:18.482 21:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V9RK6Bs3Ls 00:19:18.741 [2024-07-15 21:57:12.841683] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:18.741 [2024-07-15 21:57:12.841707] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:18.741 [2024-07-15 21:57:12.841735] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:18.741 request: 00:19:18.741 { 00:19:18.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.741 "host": "nqn.2016-06.io.spdk:host1", 00:19:18.741 "psk": "/tmp/tmp.V9RK6Bs3Ls", 00:19:18.741 "method": "nvmf_subsystem_add_host", 00:19:18.741 "req_id": 1 00:19:18.741 } 00:19:18.741 Got JSON-RPC error response 00:19:18.741 response: 00:19:18.741 { 00:19:18.741 "code": -32603, 00:19:18.741 "message": "Internal error" 00:19:18.741 } 00:19:18.741 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:18.741 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:18.741 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:18.741 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:18.741 21:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3727015 00:19:18.741 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3727015 ']' 00:19:18.741 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3727015 00:19:18.741 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:18.742 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.742 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3727015 00:19:18.742 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:18.742 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:18.742 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3727015' 00:19:18.742 killing process with pid 3727015 00:19:18.742 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3727015 00:19:18.742 21:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3727015 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.V9RK6Bs3Ls 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:19.001 
21:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3727285 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3727285 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3727285 ']' 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.001 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.001 [2024-07-15 21:57:13.139179] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:19.001 [2024-07-15 21:57:13.139232] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.001 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.001 [2024-07-15 21:57:13.196740] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.260 [2024-07-15 21:57:13.274576] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.260 [2024-07-15 21:57:13.274613] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.260 [2024-07-15 21:57:13.274620] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.260 [2024-07-15 21:57:13.274626] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.260 [2024-07-15 21:57:13.274635] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:19.260 [2024-07-15 21:57:13.274652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.827 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.827 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:19.827 21:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.827 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:19.827 21:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.827 21:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.827 21:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.V9RK6Bs3Ls 00:19:19.827 21:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.V9RK6Bs3Ls 00:19:19.827 21:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.086 [2024-07-15 21:57:14.130311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.086 21:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:20.086 21:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:20.345 [2024-07-15 21:57:14.459135] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.345 [2024-07-15 21:57:14.459319] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.345 21:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:20.603 malloc0 00:19:20.603 21:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.603 21:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V9RK6Bs3Ls 00:19:20.862 [2024-07-15 21:57:14.960452] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3727659 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3727659 /var/tmp/bdevperf.sock 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3727659 ']' 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.862 21:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.862 [2024-07-15 21:57:15.023036] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:20.862 [2024-07-15 21:57:15.023083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727659 ] 00:19:20.862 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.862 [2024-07-15 21:57:15.074645] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.120 [2024-07-15 21:57:15.147523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.684 21:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.684 21:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:21.684 21:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V9RK6Bs3Ls 00:19:21.942 [2024-07-15 21:57:15.978118] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.942 [2024-07-15 21:57:15.978191] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:21.942 TLSTESTn1 00:19:21.942 21:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:22.201 21:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:22.201 "subsystems": [ 00:19:22.201 { 00:19:22.201 "subsystem": "keyring", 00:19:22.201 "config": [] 00:19:22.201 }, 00:19:22.201 { 00:19:22.201 "subsystem": "iobuf", 00:19:22.201 "config": [ 00:19:22.201 { 00:19:22.201 "method": "iobuf_set_options", 00:19:22.201 "params": { 00:19:22.201 "small_pool_count": 8192, 00:19:22.201 "large_pool_count": 1024, 00:19:22.201 "small_bufsize": 8192, 00:19:22.201 "large_bufsize": 135168 00:19:22.201 } 00:19:22.201 } 00:19:22.201 ] 00:19:22.201 }, 00:19:22.201 { 00:19:22.201 "subsystem": "sock", 00:19:22.201 "config": [ 00:19:22.201 { 00:19:22.201 "method": "sock_set_default_impl", 00:19:22.201 "params": { 00:19:22.201 "impl_name": "posix" 00:19:22.201 } 00:19:22.201 }, 00:19:22.201 { 00:19:22.201 "method": "sock_impl_set_options", 00:19:22.201 "params": { 00:19:22.201 "impl_name": "ssl", 00:19:22.201 "recv_buf_size": 4096, 00:19:22.201 "send_buf_size": 4096, 00:19:22.201 "enable_recv_pipe": true, 00:19:22.201 "enable_quickack": false, 00:19:22.201 "enable_placement_id": 0, 00:19:22.201 "enable_zerocopy_send_server": true, 00:19:22.201 "enable_zerocopy_send_client": false, 00:19:22.201 "zerocopy_threshold": 0, 00:19:22.201 "tls_version": 0, 00:19:22.201 "enable_ktls": false 00:19:22.201 } 00:19:22.201 }, 00:19:22.201 { 00:19:22.201 "method": "sock_impl_set_options", 00:19:22.201 "params": { 00:19:22.201 "impl_name": "posix", 00:19:22.201 "recv_buf_size": 2097152, 00:19:22.201 
"send_buf_size": 2097152, 00:19:22.201 "enable_recv_pipe": true, 00:19:22.201 "enable_quickack": false, 00:19:22.201 "enable_placement_id": 0, 00:19:22.201 "enable_zerocopy_send_server": true, 00:19:22.201 "enable_zerocopy_send_client": false, 00:19:22.201 "zerocopy_threshold": 0, 00:19:22.201 "tls_version": 0, 00:19:22.201 "enable_ktls": false 00:19:22.201 } 00:19:22.201 } 00:19:22.201 ] 00:19:22.201 }, 00:19:22.201 { 00:19:22.201 "subsystem": "vmd", 00:19:22.201 "config": [] 00:19:22.201 }, 00:19:22.201 { 00:19:22.201 "subsystem": "accel", 00:19:22.202 "config": [ 00:19:22.202 { 00:19:22.202 "method": "accel_set_options", 00:19:22.202 "params": { 00:19:22.202 "small_cache_size": 128, 00:19:22.202 "large_cache_size": 16, 00:19:22.202 "task_count": 2048, 00:19:22.202 "sequence_count": 2048, 00:19:22.202 "buf_count": 2048 00:19:22.202 } 00:19:22.202 } 00:19:22.202 ] 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "subsystem": "bdev", 00:19:22.202 "config": [ 00:19:22.202 { 00:19:22.202 "method": "bdev_set_options", 00:19:22.202 "params": { 00:19:22.202 "bdev_io_pool_size": 65535, 00:19:22.202 "bdev_io_cache_size": 256, 00:19:22.202 "bdev_auto_examine": true, 00:19:22.202 "iobuf_small_cache_size": 128, 00:19:22.202 "iobuf_large_cache_size": 16 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "bdev_raid_set_options", 00:19:22.202 "params": { 00:19:22.202 "process_window_size_kb": 1024 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "bdev_iscsi_set_options", 00:19:22.202 "params": { 00:19:22.202 "timeout_sec": 30 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "bdev_nvme_set_options", 00:19:22.202 "params": { 00:19:22.202 "action_on_timeout": "none", 00:19:22.202 "timeout_us": 0, 00:19:22.202 "timeout_admin_us": 0, 00:19:22.202 "keep_alive_timeout_ms": 10000, 00:19:22.202 "arbitration_burst": 0, 00:19:22.202 "low_priority_weight": 0, 00:19:22.202 "medium_priority_weight": 0, 00:19:22.202 "high_priority_weight": 0, 00:19:22.202 "nvme_adminq_poll_period_us": 10000, 00:19:22.202 "nvme_ioq_poll_period_us": 0, 00:19:22.202 "io_queue_requests": 0, 00:19:22.202 "delay_cmd_submit": true, 00:19:22.202 "transport_retry_count": 4, 00:19:22.202 "bdev_retry_count": 3, 00:19:22.202 "transport_ack_timeout": 0, 00:19:22.202 "ctrlr_loss_timeout_sec": 0, 00:19:22.202 "reconnect_delay_sec": 0, 00:19:22.202 "fast_io_fail_timeout_sec": 0, 00:19:22.202 "disable_auto_failback": false, 00:19:22.202 "generate_uuids": false, 00:19:22.202 "transport_tos": 0, 00:19:22.202 "nvme_error_stat": false, 00:19:22.202 "rdma_srq_size": 0, 00:19:22.202 "io_path_stat": false, 00:19:22.202 "allow_accel_sequence": false, 00:19:22.202 "rdma_max_cq_size": 0, 00:19:22.202 "rdma_cm_event_timeout_ms": 0, 00:19:22.202 "dhchap_digests": [ 00:19:22.202 "sha256", 00:19:22.202 "sha384", 00:19:22.202 "sha512" 00:19:22.202 ], 00:19:22.202 "dhchap_dhgroups": [ 00:19:22.202 "null", 00:19:22.202 "ffdhe2048", 00:19:22.202 "ffdhe3072", 00:19:22.202 "ffdhe4096", 00:19:22.202 "ffdhe6144", 00:19:22.202 "ffdhe8192" 00:19:22.202 ] 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "bdev_nvme_set_hotplug", 00:19:22.202 "params": { 00:19:22.202 "period_us": 100000, 00:19:22.202 "enable": false 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "bdev_malloc_create", 00:19:22.202 "params": { 00:19:22.202 "name": "malloc0", 00:19:22.202 "num_blocks": 8192, 00:19:22.202 "block_size": 4096, 00:19:22.202 "physical_block_size": 4096, 00:19:22.202 "uuid": 
"44aa331e-9ed8-47e0-acd9-1b9cac1df5b0", 00:19:22.202 "optimal_io_boundary": 0 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "bdev_wait_for_examine" 00:19:22.202 } 00:19:22.202 ] 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "subsystem": "nbd", 00:19:22.202 "config": [] 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "subsystem": "scheduler", 00:19:22.202 "config": [ 00:19:22.202 { 00:19:22.202 "method": "framework_set_scheduler", 00:19:22.202 "params": { 00:19:22.202 "name": "static" 00:19:22.202 } 00:19:22.202 } 00:19:22.202 ] 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "subsystem": "nvmf", 00:19:22.202 "config": [ 00:19:22.202 { 00:19:22.202 "method": "nvmf_set_config", 00:19:22.202 "params": { 00:19:22.202 "discovery_filter": "match_any", 00:19:22.202 "admin_cmd_passthru": { 00:19:22.202 "identify_ctrlr": false 00:19:22.202 } 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "nvmf_set_max_subsystems", 00:19:22.202 "params": { 00:19:22.202 "max_subsystems": 1024 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "nvmf_set_crdt", 00:19:22.202 "params": { 00:19:22.202 "crdt1": 0, 00:19:22.202 "crdt2": 0, 00:19:22.202 "crdt3": 0 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "nvmf_create_transport", 00:19:22.202 "params": { 00:19:22.202 "trtype": "TCP", 00:19:22.202 "max_queue_depth": 128, 00:19:22.202 "max_io_qpairs_per_ctrlr": 127, 00:19:22.202 "in_capsule_data_size": 4096, 00:19:22.202 "max_io_size": 131072, 00:19:22.202 "io_unit_size": 131072, 00:19:22.202 "max_aq_depth": 128, 00:19:22.202 "num_shared_buffers": 511, 00:19:22.202 "buf_cache_size": 4294967295, 00:19:22.202 "dif_insert_or_strip": false, 00:19:22.202 "zcopy": false, 00:19:22.202 "c2h_success": false, 00:19:22.202 "sock_priority": 0, 00:19:22.202 "abort_timeout_sec": 1, 00:19:22.202 "ack_timeout": 0, 00:19:22.202 "data_wr_pool_size": 0 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "nvmf_create_subsystem", 00:19:22.202 "params": { 00:19:22.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.202 "allow_any_host": false, 00:19:22.202 "serial_number": "SPDK00000000000001", 00:19:22.202 "model_number": "SPDK bdev Controller", 00:19:22.202 "max_namespaces": 10, 00:19:22.202 "min_cntlid": 1, 00:19:22.202 "max_cntlid": 65519, 00:19:22.202 "ana_reporting": false 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "nvmf_subsystem_add_host", 00:19:22.202 "params": { 00:19:22.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.202 "host": "nqn.2016-06.io.spdk:host1", 00:19:22.202 "psk": "/tmp/tmp.V9RK6Bs3Ls" 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "nvmf_subsystem_add_ns", 00:19:22.202 "params": { 00:19:22.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.202 "namespace": { 00:19:22.202 "nsid": 1, 00:19:22.202 "bdev_name": "malloc0", 00:19:22.202 "nguid": "44AA331E9ED847E0ACD91B9CAC1DF5B0", 00:19:22.202 "uuid": "44aa331e-9ed8-47e0-acd9-1b9cac1df5b0", 00:19:22.202 "no_auto_visible": false 00:19:22.202 } 00:19:22.202 } 00:19:22.202 }, 00:19:22.202 { 00:19:22.202 "method": "nvmf_subsystem_add_listener", 00:19:22.202 "params": { 00:19:22.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.202 "listen_address": { 00:19:22.202 "trtype": "TCP", 00:19:22.202 "adrfam": "IPv4", 00:19:22.202 "traddr": "10.0.0.2", 00:19:22.202 "trsvcid": "4420" 00:19:22.202 }, 00:19:22.202 "secure_channel": true 00:19:22.202 } 00:19:22.202 } 00:19:22.202 ] 00:19:22.202 } 00:19:22.202 ] 00:19:22.202 }' 00:19:22.202 21:57:16 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:22.464 21:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:22.464 "subsystems": [ 00:19:22.464 { 00:19:22.464 "subsystem": "keyring", 00:19:22.464 "config": [] 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "subsystem": "iobuf", 00:19:22.464 "config": [ 00:19:22.464 { 00:19:22.464 "method": "iobuf_set_options", 00:19:22.464 "params": { 00:19:22.464 "small_pool_count": 8192, 00:19:22.464 "large_pool_count": 1024, 00:19:22.464 "small_bufsize": 8192, 00:19:22.464 "large_bufsize": 135168 00:19:22.464 } 00:19:22.464 } 00:19:22.464 ] 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "subsystem": "sock", 00:19:22.464 "config": [ 00:19:22.464 { 00:19:22.464 "method": "sock_set_default_impl", 00:19:22.464 "params": { 00:19:22.464 "impl_name": "posix" 00:19:22.464 } 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "method": "sock_impl_set_options", 00:19:22.464 "params": { 00:19:22.464 "impl_name": "ssl", 00:19:22.464 "recv_buf_size": 4096, 00:19:22.464 "send_buf_size": 4096, 00:19:22.464 "enable_recv_pipe": true, 00:19:22.464 "enable_quickack": false, 00:19:22.464 "enable_placement_id": 0, 00:19:22.464 "enable_zerocopy_send_server": true, 00:19:22.464 "enable_zerocopy_send_client": false, 00:19:22.464 "zerocopy_threshold": 0, 00:19:22.464 "tls_version": 0, 00:19:22.464 "enable_ktls": false 00:19:22.464 } 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "method": "sock_impl_set_options", 00:19:22.464 "params": { 00:19:22.464 "impl_name": "posix", 00:19:22.464 "recv_buf_size": 2097152, 00:19:22.464 "send_buf_size": 2097152, 00:19:22.464 "enable_recv_pipe": true, 00:19:22.464 "enable_quickack": false, 00:19:22.464 "enable_placement_id": 0, 00:19:22.464 "enable_zerocopy_send_server": true, 00:19:22.464 "enable_zerocopy_send_client": false, 00:19:22.464 "zerocopy_threshold": 0, 00:19:22.464 "tls_version": 0, 00:19:22.464 "enable_ktls": false 00:19:22.464 } 00:19:22.464 } 00:19:22.464 ] 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "subsystem": "vmd", 00:19:22.464 "config": [] 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "subsystem": "accel", 00:19:22.464 "config": [ 00:19:22.464 { 00:19:22.464 "method": "accel_set_options", 00:19:22.464 "params": { 00:19:22.464 "small_cache_size": 128, 00:19:22.464 "large_cache_size": 16, 00:19:22.464 "task_count": 2048, 00:19:22.464 "sequence_count": 2048, 00:19:22.464 "buf_count": 2048 00:19:22.464 } 00:19:22.464 } 00:19:22.464 ] 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "subsystem": "bdev", 00:19:22.464 "config": [ 00:19:22.464 { 00:19:22.464 "method": "bdev_set_options", 00:19:22.464 "params": { 00:19:22.464 "bdev_io_pool_size": 65535, 00:19:22.464 "bdev_io_cache_size": 256, 00:19:22.464 "bdev_auto_examine": true, 00:19:22.464 "iobuf_small_cache_size": 128, 00:19:22.464 "iobuf_large_cache_size": 16 00:19:22.464 } 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "method": "bdev_raid_set_options", 00:19:22.464 "params": { 00:19:22.464 "process_window_size_kb": 1024 00:19:22.464 } 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "method": "bdev_iscsi_set_options", 00:19:22.464 "params": { 00:19:22.464 "timeout_sec": 30 00:19:22.464 } 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "method": "bdev_nvme_set_options", 00:19:22.464 "params": { 00:19:22.464 "action_on_timeout": "none", 00:19:22.464 "timeout_us": 0, 00:19:22.464 "timeout_admin_us": 0, 00:19:22.464 "keep_alive_timeout_ms": 10000, 00:19:22.464 "arbitration_burst": 0, 
00:19:22.464 "low_priority_weight": 0, 00:19:22.464 "medium_priority_weight": 0, 00:19:22.464 "high_priority_weight": 0, 00:19:22.464 "nvme_adminq_poll_period_us": 10000, 00:19:22.464 "nvme_ioq_poll_period_us": 0, 00:19:22.464 "io_queue_requests": 512, 00:19:22.464 "delay_cmd_submit": true, 00:19:22.464 "transport_retry_count": 4, 00:19:22.464 "bdev_retry_count": 3, 00:19:22.464 "transport_ack_timeout": 0, 00:19:22.464 "ctrlr_loss_timeout_sec": 0, 00:19:22.464 "reconnect_delay_sec": 0, 00:19:22.464 "fast_io_fail_timeout_sec": 0, 00:19:22.464 "disable_auto_failback": false, 00:19:22.464 "generate_uuids": false, 00:19:22.464 "transport_tos": 0, 00:19:22.464 "nvme_error_stat": false, 00:19:22.464 "rdma_srq_size": 0, 00:19:22.464 "io_path_stat": false, 00:19:22.464 "allow_accel_sequence": false, 00:19:22.464 "rdma_max_cq_size": 0, 00:19:22.464 "rdma_cm_event_timeout_ms": 0, 00:19:22.464 "dhchap_digests": [ 00:19:22.464 "sha256", 00:19:22.464 "sha384", 00:19:22.464 "sha512" 00:19:22.464 ], 00:19:22.464 "dhchap_dhgroups": [ 00:19:22.464 "null", 00:19:22.464 "ffdhe2048", 00:19:22.464 "ffdhe3072", 00:19:22.464 "ffdhe4096", 00:19:22.464 "ffdhe6144", 00:19:22.464 "ffdhe8192" 00:19:22.464 ] 00:19:22.464 } 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "method": "bdev_nvme_attach_controller", 00:19:22.464 "params": { 00:19:22.464 "name": "TLSTEST", 00:19:22.464 "trtype": "TCP", 00:19:22.464 "adrfam": "IPv4", 00:19:22.464 "traddr": "10.0.0.2", 00:19:22.464 "trsvcid": "4420", 00:19:22.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.464 "prchk_reftag": false, 00:19:22.464 "prchk_guard": false, 00:19:22.464 "ctrlr_loss_timeout_sec": 0, 00:19:22.464 "reconnect_delay_sec": 0, 00:19:22.464 "fast_io_fail_timeout_sec": 0, 00:19:22.464 "psk": "/tmp/tmp.V9RK6Bs3Ls", 00:19:22.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.464 "hdgst": false, 00:19:22.464 "ddgst": false 00:19:22.464 } 00:19:22.464 }, 00:19:22.464 { 00:19:22.464 "method": "bdev_nvme_set_hotplug", 00:19:22.464 "params": { 00:19:22.465 "period_us": 100000, 00:19:22.465 "enable": false 00:19:22.465 } 00:19:22.465 }, 00:19:22.465 { 00:19:22.465 "method": "bdev_wait_for_examine" 00:19:22.465 } 00:19:22.465 ] 00:19:22.465 }, 00:19:22.465 { 00:19:22.465 "subsystem": "nbd", 00:19:22.465 "config": [] 00:19:22.465 } 00:19:22.465 ] 00:19:22.465 }' 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3727659 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3727659 ']' 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3727659 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3727659 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3727659' 00:19:22.465 killing process with pid 3727659 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3727659 00:19:22.465 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.465 00:19:22.465 Latency(us) 00:19:22.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:19:22.465 =================================================================================================================== 00:19:22.465 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:22.465 [2024-07-15 21:57:16.617293] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:22.465 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3727659 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3727285 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3727285 ']' 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3727285 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3727285 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3727285' 00:19:22.772 killing process with pid 3727285 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3727285 00:19:22.772 [2024-07-15 21:57:16.843958] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:22.772 21:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3727285 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:23.032 "subsystems": [ 00:19:23.032 { 00:19:23.032 "subsystem": "keyring", 00:19:23.032 "config": [] 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "subsystem": "iobuf", 00:19:23.032 "config": [ 00:19:23.032 { 00:19:23.032 "method": "iobuf_set_options", 00:19:23.032 "params": { 00:19:23.032 "small_pool_count": 8192, 00:19:23.032 "large_pool_count": 1024, 00:19:23.032 "small_bufsize": 8192, 00:19:23.032 "large_bufsize": 135168 00:19:23.032 } 00:19:23.032 } 00:19:23.032 ] 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "subsystem": "sock", 00:19:23.032 "config": [ 00:19:23.032 { 00:19:23.032 "method": "sock_set_default_impl", 00:19:23.032 "params": { 00:19:23.032 "impl_name": "posix" 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "sock_impl_set_options", 00:19:23.032 "params": { 00:19:23.032 "impl_name": "ssl", 00:19:23.032 "recv_buf_size": 4096, 00:19:23.032 "send_buf_size": 4096, 00:19:23.032 "enable_recv_pipe": true, 00:19:23.032 "enable_quickack": false, 00:19:23.032 "enable_placement_id": 0, 00:19:23.032 "enable_zerocopy_send_server": true, 00:19:23.032 "enable_zerocopy_send_client": false, 00:19:23.032 "zerocopy_threshold": 0, 00:19:23.032 "tls_version": 0, 00:19:23.032 "enable_ktls": false 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "sock_impl_set_options", 00:19:23.032 "params": { 00:19:23.032 "impl_name": "posix", 00:19:23.032 
"recv_buf_size": 2097152, 00:19:23.032 "send_buf_size": 2097152, 00:19:23.032 "enable_recv_pipe": true, 00:19:23.032 "enable_quickack": false, 00:19:23.032 "enable_placement_id": 0, 00:19:23.032 "enable_zerocopy_send_server": true, 00:19:23.032 "enable_zerocopy_send_client": false, 00:19:23.032 "zerocopy_threshold": 0, 00:19:23.032 "tls_version": 0, 00:19:23.032 "enable_ktls": false 00:19:23.032 } 00:19:23.032 } 00:19:23.032 ] 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "subsystem": "vmd", 00:19:23.032 "config": [] 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "subsystem": "accel", 00:19:23.032 "config": [ 00:19:23.032 { 00:19:23.032 "method": "accel_set_options", 00:19:23.032 "params": { 00:19:23.032 "small_cache_size": 128, 00:19:23.032 "large_cache_size": 16, 00:19:23.032 "task_count": 2048, 00:19:23.032 "sequence_count": 2048, 00:19:23.032 "buf_count": 2048 00:19:23.032 } 00:19:23.032 } 00:19:23.032 ] 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "subsystem": "bdev", 00:19:23.032 "config": [ 00:19:23.032 { 00:19:23.032 "method": "bdev_set_options", 00:19:23.032 "params": { 00:19:23.032 "bdev_io_pool_size": 65535, 00:19:23.032 "bdev_io_cache_size": 256, 00:19:23.032 "bdev_auto_examine": true, 00:19:23.032 "iobuf_small_cache_size": 128, 00:19:23.032 "iobuf_large_cache_size": 16 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "bdev_raid_set_options", 00:19:23.032 "params": { 00:19:23.032 "process_window_size_kb": 1024 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "bdev_iscsi_set_options", 00:19:23.032 "params": { 00:19:23.032 "timeout_sec": 30 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "bdev_nvme_set_options", 00:19:23.032 "params": { 00:19:23.032 "action_on_timeout": "none", 00:19:23.032 "timeout_us": 0, 00:19:23.032 "timeout_admin_us": 0, 00:19:23.032 "keep_alive_timeout_ms": 10000, 00:19:23.032 "arbitration_burst": 0, 00:19:23.032 "low_priority_weight": 0, 00:19:23.032 "medium_priority_weight": 0, 00:19:23.032 "high_priority_weight": 0, 00:19:23.032 "nvme_adminq_poll_period_us": 10000, 00:19:23.032 "nvme_ioq_poll_period_us": 0, 00:19:23.032 "io_queue_requests": 0, 00:19:23.032 "delay_cmd_submit": true, 00:19:23.032 "transport_retry_count": 4, 00:19:23.032 "bdev_retry_count": 3, 00:19:23.032 "transport_ack_timeout": 0, 00:19:23.032 "ctrlr_loss_timeout_sec": 0, 00:19:23.032 "reconnect_delay_sec": 0, 00:19:23.032 "fast_io_fail_timeout_sec": 0, 00:19:23.032 "disable_auto_failback": false, 00:19:23.032 "generate_uuids": false, 00:19:23.032 "transport_tos": 0, 00:19:23.032 "nvme_error_stat": false, 00:19:23.032 "rdma_srq_size": 0, 00:19:23.032 "io_path_stat": false, 00:19:23.032 "allow_accel_sequence": false, 00:19:23.032 "rdma_max_cq_size": 0, 00:19:23.032 "rdma_cm_event_timeout_ms": 0, 00:19:23.032 "dhchap_digests": [ 00:19:23.032 "sha256", 00:19:23.032 "sha384", 00:19:23.032 "sha512" 00:19:23.032 ], 00:19:23.032 "dhchap_dhgroups": [ 00:19:23.032 "null", 00:19:23.032 "ffdhe2048", 00:19:23.032 "ffdhe3072", 00:19:23.032 "ffdhe4096", 00:19:23.032 "ffdhe6144", 00:19:23.032 "ffdhe8192" 00:19:23.032 ] 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "bdev_nvme_set_hotplug", 00:19:23.032 "params": { 00:19:23.032 "period_us": 100000, 00:19:23.032 "enable": false 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "bdev_malloc_create", 00:19:23.032 "params": { 00:19:23.032 "name": "malloc0", 00:19:23.032 "num_blocks": 8192, 00:19:23.032 "block_size": 4096, 00:19:23.032 "physical_block_size": 4096, 
00:19:23.032 "uuid": "44aa331e-9ed8-47e0-acd9-1b9cac1df5b0", 00:19:23.032 "optimal_io_boundary": 0 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "bdev_wait_for_examine" 00:19:23.032 } 00:19:23.032 ] 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "subsystem": "nbd", 00:19:23.032 "config": [] 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "subsystem": "scheduler", 00:19:23.032 "config": [ 00:19:23.032 { 00:19:23.032 "method": "framework_set_scheduler", 00:19:23.032 "params": { 00:19:23.032 "name": "static" 00:19:23.032 } 00:19:23.032 } 00:19:23.032 ] 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "subsystem": "nvmf", 00:19:23.032 "config": [ 00:19:23.032 { 00:19:23.032 "method": "nvmf_set_config", 00:19:23.032 "params": { 00:19:23.032 "discovery_filter": "match_any", 00:19:23.032 "admin_cmd_passthru": { 00:19:23.032 "identify_ctrlr": false 00:19:23.032 } 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "nvmf_set_max_subsystems", 00:19:23.032 "params": { 00:19:23.032 "max_subsystems": 1024 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "nvmf_set_crdt", 00:19:23.032 "params": { 00:19:23.032 "crdt1": 0, 00:19:23.032 "crdt2": 0, 00:19:23.032 "crdt3": 0 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "nvmf_create_transport", 00:19:23.032 "params": { 00:19:23.032 "trtype": "TCP", 00:19:23.032 "max_queue_depth": 128, 00:19:23.032 "max_io_qpairs_per_ctrlr": 127, 00:19:23.032 "in_capsule_data_size": 4096, 00:19:23.032 "max_io_size": 131072, 00:19:23.032 "io_unit_size": 131072, 00:19:23.032 "max_aq_depth": 128, 00:19:23.032 "num_shared_buffers": 511, 00:19:23.032 "buf_cache_size": 4294967295, 00:19:23.032 "dif_insert_or_strip": false, 00:19:23.032 "zcopy": false, 00:19:23.032 "c2h_success": false, 00:19:23.032 "sock_priority": 0, 00:19:23.032 "abort_timeout_sec": 1, 00:19:23.032 "ack_timeout": 0, 00:19:23.032 "data_wr_pool_size": 0 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "nvmf_create_subsystem", 00:19:23.032 "params": { 00:19:23.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.032 "allow_any_host": false, 00:19:23.032 "serial_number": "SPDK00000000000001", 00:19:23.032 "model_number": "SPDK bdev Controller", 00:19:23.032 "max_namespaces": 10, 00:19:23.032 "min_cntlid": 1, 00:19:23.032 "max_cntlid": 65519, 00:19:23.032 "ana_reporting": false 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "nvmf_subsystem_add_host", 00:19:23.032 "params": { 00:19:23.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.032 "host": "nqn.2016-06.io.spdk:host1", 00:19:23.032 "psk": "/tmp/tmp.V9RK6Bs3Ls" 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "nvmf_subsystem_add_ns", 00:19:23.032 "params": { 00:19:23.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.032 "namespace": { 00:19:23.032 "nsid": 1, 00:19:23.032 "bdev_name": "malloc0", 00:19:23.032 "nguid": "44AA331E9ED847E0ACD91B9CAC1DF5B0", 00:19:23.032 "uuid": "44aa331e-9ed8-47e0-acd9-1b9cac1df5b0", 00:19:23.032 "no_auto_visible": false 00:19:23.032 } 00:19:23.032 } 00:19:23.032 }, 00:19:23.032 { 00:19:23.032 "method": "nvmf_subsystem_add_listener", 00:19:23.032 "params": { 00:19:23.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.032 "listen_address": { 00:19:23.032 "trtype": "TCP", 00:19:23.032 "adrfam": "IPv4", 00:19:23.032 "traddr": "10.0.0.2", 00:19:23.032 "trsvcid": "4420" 00:19:23.032 }, 00:19:23.032 "secure_channel": true 00:19:23.032 } 00:19:23.032 } 00:19:23.032 ] 00:19:23.032 } 00:19:23.032 ] 00:19:23.032 }' 
00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3728016 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3728016 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3728016 ']' 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.032 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.032 [2024-07-15 21:57:17.091572] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:23.033 [2024-07-15 21:57:17.091618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.033 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.033 [2024-07-15 21:57:17.148000] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.033 [2024-07-15 21:57:17.226564] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.033 [2024-07-15 21:57:17.226598] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.033 [2024-07-15 21:57:17.226605] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.033 [2024-07-15 21:57:17.226611] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.033 [2024-07-15 21:57:17.226616] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
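waitforlisten, traced above as common/autotest_common.sh@833-@838, is what parks the script until the new target answers on /var/tmp/spdk.sock. Only as a rough sketch of that helper, not the autotest source itself, the wait amounts to polling the RPC socket with a cheap request until it succeeds or max_retries runs out:

    # rpc_get_methods is served by any SPDK app once its RPC server is up
    for ((i = max_retries; i != 0; i--)); do
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.5
    done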
00:19:23.033 [2024-07-15 21:57:17.226663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.291 [2024-07-15 21:57:17.429919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.291 [2024-07-15 21:57:17.445886] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:23.291 [2024-07-15 21:57:17.461949] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.291 [2024-07-15 21:57:17.470526] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3728244 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3728244 /var/tmp/bdevperf.sock 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3728244 ']' 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
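With the target listening on 10.0.0.2 port 4420, target/tls.sh@204 starts bdevperf idle against its own RPC socket and feeds it the initiator configuration the same way, over a file descriptor. The launch pattern, flags exactly as logged, with $bdevperfconf assumed to hold the JSON echoed next:

    # -z parks the app until perform_tests arrives on the -r socket;
    # -q 128 -o 4096 -w verify -t 10: queue depth, I/O size, workload, runtime
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &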
00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:23.859 "subsystems": [ 00:19:23.859 { 00:19:23.859 "subsystem": "keyring", 00:19:23.859 "config": [] 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "subsystem": "iobuf", 00:19:23.859 "config": [ 00:19:23.859 { 00:19:23.859 "method": "iobuf_set_options", 00:19:23.859 "params": { 00:19:23.859 "small_pool_count": 8192, 00:19:23.859 "large_pool_count": 1024, 00:19:23.859 "small_bufsize": 8192, 00:19:23.859 "large_bufsize": 135168 00:19:23.859 } 00:19:23.859 } 00:19:23.859 ] 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "subsystem": "sock", 00:19:23.859 "config": [ 00:19:23.859 { 00:19:23.859 "method": "sock_set_default_impl", 00:19:23.859 "params": { 00:19:23.859 "impl_name": "posix" 00:19:23.859 } 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "method": "sock_impl_set_options", 00:19:23.859 "params": { 00:19:23.859 "impl_name": "ssl", 00:19:23.859 "recv_buf_size": 4096, 00:19:23.859 "send_buf_size": 4096, 00:19:23.859 "enable_recv_pipe": true, 00:19:23.859 "enable_quickack": false, 00:19:23.859 "enable_placement_id": 0, 00:19:23.859 "enable_zerocopy_send_server": true, 00:19:23.859 "enable_zerocopy_send_client": false, 00:19:23.859 "zerocopy_threshold": 0, 00:19:23.859 "tls_version": 0, 00:19:23.859 "enable_ktls": false 00:19:23.859 } 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "method": "sock_impl_set_options", 00:19:23.859 "params": { 00:19:23.859 "impl_name": "posix", 00:19:23.859 "recv_buf_size": 2097152, 00:19:23.859 "send_buf_size": 2097152, 00:19:23.859 "enable_recv_pipe": true, 00:19:23.859 "enable_quickack": false, 00:19:23.859 "enable_placement_id": 0, 00:19:23.859 "enable_zerocopy_send_server": true, 00:19:23.859 "enable_zerocopy_send_client": false, 00:19:23.859 "zerocopy_threshold": 0, 00:19:23.859 "tls_version": 0, 00:19:23.859 "enable_ktls": false 00:19:23.859 } 00:19:23.859 } 00:19:23.859 ] 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "subsystem": "vmd", 00:19:23.859 "config": [] 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "subsystem": "accel", 00:19:23.859 "config": [ 00:19:23.859 { 00:19:23.859 "method": "accel_set_options", 00:19:23.859 "params": { 00:19:23.859 "small_cache_size": 128, 00:19:23.859 "large_cache_size": 16, 00:19:23.859 "task_count": 2048, 00:19:23.859 "sequence_count": 2048, 00:19:23.859 "buf_count": 2048 00:19:23.859 } 00:19:23.859 } 00:19:23.859 ] 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "subsystem": "bdev", 00:19:23.859 "config": [ 00:19:23.859 { 00:19:23.859 "method": "bdev_set_options", 00:19:23.859 "params": { 00:19:23.859 "bdev_io_pool_size": 65535, 00:19:23.859 "bdev_io_cache_size": 256, 00:19:23.859 "bdev_auto_examine": true, 00:19:23.859 "iobuf_small_cache_size": 128, 00:19:23.859 "iobuf_large_cache_size": 16 00:19:23.859 } 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "method": "bdev_raid_set_options", 00:19:23.859 "params": { 00:19:23.859 "process_window_size_kb": 1024 00:19:23.859 } 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "method": "bdev_iscsi_set_options", 00:19:23.859 "params": { 00:19:23.859 "timeout_sec": 30 00:19:23.859 } 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "method": "bdev_nvme_set_options", 00:19:23.859 "params": { 00:19:23.859 "action_on_timeout": "none", 00:19:23.859 "timeout_us": 0, 00:19:23.859 "timeout_admin_us": 0, 00:19:23.859 "keep_alive_timeout_ms": 10000, 00:19:23.859 "arbitration_burst": 0, 00:19:23.859 "low_priority_weight": 0, 00:19:23.859 "medium_priority_weight": 0, 00:19:23.859 "high_priority_weight": 0, 00:19:23.859 
"nvme_adminq_poll_period_us": 10000, 00:19:23.859 "nvme_ioq_poll_period_us": 0, 00:19:23.859 "io_queue_requests": 512, 00:19:23.859 "delay_cmd_submit": true, 00:19:23.859 "transport_retry_count": 4, 00:19:23.859 "bdev_retry_count": 3, 00:19:23.859 "transport_ack_timeout": 0, 00:19:23.859 "ctrlr_loss_timeout_sec": 0, 00:19:23.859 "reconnect_delay_sec": 0, 00:19:23.859 "fast_io_fail_timeout_sec": 0, 00:19:23.859 "disable_auto_failback": false, 00:19:23.859 "generate_uuids": false, 00:19:23.859 "transport_tos": 0, 00:19:23.859 "nvme_error_stat": false, 00:19:23.859 "rdma_srq_size": 0, 00:19:23.859 "io_path_stat": false, 00:19:23.859 "allow_accel_sequence": false, 00:19:23.859 "rdma_max_cq_size": 0, 00:19:23.859 "rdma_cm_event_timeout_ms": 0, 00:19:23.859 "dhchap_digests": [ 00:19:23.859 "sha256", 00:19:23.859 "sha384", 00:19:23.859 "sha512" 00:19:23.859 ], 00:19:23.859 "dhchap_dhgroups": [ 00:19:23.859 "null", 00:19:23.859 "ffdhe2048", 00:19:23.859 "ffdhe3072", 00:19:23.859 "ffdhe4096", 00:19:23.859 "ffdhe6144", 00:19:23.859 "ffdhe8192" 00:19:23.859 ] 00:19:23.859 } 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "method": "bdev_nvme_attach_controller", 00:19:23.859 "params": { 00:19:23.859 "name": "TLSTEST", 00:19:23.859 "trtype": "TCP", 00:19:23.859 "adrfam": "IPv4", 00:19:23.859 "traddr": "10.0.0.2", 00:19:23.859 "trsvcid": "4420", 00:19:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.859 "prchk_reftag": false, 00:19:23.859 "prchk_guard": false, 00:19:23.859 "ctrlr_loss_timeout_sec": 0, 00:19:23.859 "reconnect_delay_sec": 0, 00:19:23.859 "fast_io_fail_timeout_sec": 0, 00:19:23.859 "psk": "/tmp/tmp.V9RK6Bs3Ls", 00:19:23.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.859 "hdgst": false, 00:19:23.859 "ddgst": false 00:19:23.859 } 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "method": "bdev_nvme_set_hotplug", 00:19:23.859 "params": { 00:19:23.859 "period_us": 100000, 00:19:23.859 "enable": false 00:19:23.859 } 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "method": "bdev_wait_for_examine" 00:19:23.859 } 00:19:23.859 ] 00:19:23.859 }, 00:19:23.859 { 00:19:23.859 "subsystem": "nbd", 00:19:23.859 "config": [] 00:19:23.859 } 00:19:23.859 ] 00:19:23.859 }' 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.859 21:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.859 [2024-07-15 21:57:17.972644] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:19:23.859 [2024-07-15 21:57:17.972691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728244 ] 00:19:23.859 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.859 [2024-07-15 21:57:18.023586] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.118 [2024-07-15 21:57:18.101382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.118 [2024-07-15 21:57:18.242769] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.118 [2024-07-15 21:57:18.242860] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:24.684 21:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.684 21:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:24.684 21:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:24.685 Running I/O for 10 seconds... 00:19:36.886 00:19:36.886 Latency(us) 00:19:36.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.886 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.886 Verification LBA range: start 0x0 length 0x2000 00:19:36.886 TLSTESTn1 : 10.02 5636.67 22.02 0.00 0.00 22671.01 4786.98 47413.87 00:19:36.886 =================================================================================================================== 00:19:36.886 Total : 5636.67 22.02 0.00 0.00 22671.01 4786.98 47413.87 00:19:36.886 0 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3728244 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3728244 ']' 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3728244 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3728244 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3728244' 00:19:36.886 killing process with pid 3728244 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3728244 00:19:36.886 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.886 00:19:36.886 Latency(us) 00:19:36.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.886 =================================================================================================================== 00:19:36.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.886 [2024-07-15 21:57:28.975321] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:36.886 21:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3728244 00:19:36.886 21:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3728016 00:19:36.886 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3728016 ']' 00:19:36.886 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3728016 00:19:36.886 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.886 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.886 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3728016 00:19:36.886 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:36.886 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:36.886 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3728016' 00:19:36.886 killing process with pid 3728016 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3728016 00:19:36.887 [2024-07-15 21:57:29.201076] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3728016 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3730103 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3730103 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3730103 ']' 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.887 21:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.887 [2024-07-15 21:57:29.443028] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:19:36.887 [2024-07-15 21:57:29.443073] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.887 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.887 [2024-07-15 21:57:29.498516] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.887 [2024-07-15 21:57:29.576775] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.887 [2024-07-15 21:57:29.576810] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.887 [2024-07-15 21:57:29.576817] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.887 [2024-07-15 21:57:29.576824] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.887 [2024-07-15 21:57:29.576829] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.887 [2024-07-15 21:57:29.576849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.V9RK6Bs3Ls 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.V9RK6Bs3Ls 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.887 [2024-07-15 21:57:30.455291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.887 [2024-07-15 21:57:30.812193] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.887 [2024-07-15 21:57:30.812376] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.887 malloc0 00:19:36.887 21:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.V9RK6Bs3Ls 00:19:37.145 [2024-07-15 21:57:31.321649] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3730361 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3730361 /var/tmp/bdevperf.sock 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3730361 ']' 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.145 21:57:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.145 [2024-07-15 21:57:31.369619] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:37.145 [2024-07-15 21:57:31.369668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730361 ] 00:19:37.403 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.403 [2024-07-15 21:57:31.425517] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.403 [2024-07-15 21:57:31.498842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.971 21:57:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.971 21:57:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:37.971 21:57:32 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V9RK6Bs3Ls 00:19:38.228 21:57:32 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:38.486 [2024-07-15 21:57:32.514294] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.486 nvme0n1 00:19:38.486 21:57:32 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:38.486 Running I/O for 1 seconds... 
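This attach went through the keyring rather than a raw PSK path: target/tls.sh@227 registered the file once under the name key0, and tls.sh@228 referenced it with --psk key0. Note the attach above logs only the 'TLS support is considered experimental' notice, with no spdk_nvme_ctrlr_opts.psk deprecation warning, which is the point of the keyring variant. The two RPCs as issued:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V9RK6Bs3Ls
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1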
00:19:39.861 00:19:39.861 Latency(us) 00:19:39.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.861 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:39.861 Verification LBA range: start 0x0 length 0x2000 00:19:39.861 nvme0n1 : 1.02 5298.44 20.70 0.00 0.00 23957.40 7094.98 59267.34 00:19:39.861 =================================================================================================================== 00:19:39.861 Total : 5298.44 20.70 0.00 0.00 23957.40 7094.98 59267.34 00:19:39.861 0 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3730361 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3730361 ']' 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3730361 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3730361 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3730361' 00:19:39.861 killing process with pid 3730361 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3730361 00:19:39.861 Received shutdown signal, test time was about 1.000000 seconds 00:19:39.861 00:19:39.861 Latency(us) 00:19:39.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.861 =================================================================================================================== 00:19:39.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3730361 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3730103 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3730103 ']' 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3730103 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3730103 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3730103' 00:19:39.861 killing process with pid 3730103 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3730103 00:19:39.861 [2024-07-15 21:57:33.997919] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:39.861 21:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3730103 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.120 
21:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3730830 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3730830 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3730830 ']' 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.120 21:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.120 [2024-07-15 21:57:34.242651] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:40.120 [2024-07-15 21:57:34.242696] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.120 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.120 [2024-07-15 21:57:34.300246] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.378 [2024-07-15 21:57:34.368964] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.378 [2024-07-15 21:57:34.369003] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.378 [2024-07-15 21:57:34.369010] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.378 [2024-07-15 21:57:34.369016] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.378 [2024-07-15 21:57:34.369021] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
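This target comes up without a -c config, so tls.sh@241 rebuilds the subsystem over RPC. setup_nvmf_tgt (target/tls.sh@49-@58) drove the same bring-up for the previous target, and the sequence reduces to six calls:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as a secure (TLS) channel
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.V9RK6Bs3Ls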
00:19:40.378 [2024-07-15 21:57:34.369060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 [2024-07-15 21:57:35.079938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.945 malloc0 00:19:40.945 [2024-07-15 21:57:35.108127] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:40.945 [2024-07-15 21:57:35.108311] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3731078 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3731078 /var/tmp/bdevperf.sock 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3731078 ']' 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.945 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 [2024-07-15 21:57:35.182205] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
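One detail to watch in the save_config dump a few lines below (target/tls.sh@265): unlike the earlier dumps, whose keyring subsystem was an empty list, this one carries a keyring_file_add_key entry for key0, so the captured configuration restores the PSK on its own when replayed. A sketch of that round trip, with tgt.json as an assumed scratch file:

    # capture the live configuration, key registration included ...
    scripts/rpc.py save_config > tgt.json
    # ... so a target restarted from it needs no separate key setup
    nvmf_tgt -c tgt.json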
00:19:40.945 [2024-07-15 21:57:35.182252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731078 ] 00:19:41.203 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.203 [2024-07-15 21:57:35.238618] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.203 [2024-07-15 21:57:35.317550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.769 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.769 21:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:41.769 21:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V9RK6Bs3Ls 00:19:42.027 21:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:42.285 [2024-07-15 21:57:36.317659] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.285 nvme0n1 00:19:42.285 21:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:42.285 Running I/O for 1 seconds... 00:19:43.657 00:19:43.657 Latency(us) 00:19:43.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.657 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:43.657 Verification LBA range: start 0x0 length 0x2000 00:19:43.657 nvme0n1 : 1.02 5320.91 20.78 0.00 0.00 23856.98 6810.05 49693.38 00:19:43.657 =================================================================================================================== 00:19:43.657 Total : 5320.91 20.78 0.00 0.00 23856.98 6810.05 49693.38 00:19:43.657 0 00:19:43.657 21:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:43.657 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.657 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.657 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.657 21:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:43.657 "subsystems": [ 00:19:43.657 { 00:19:43.657 "subsystem": "keyring", 00:19:43.657 "config": [ 00:19:43.657 { 00:19:43.657 "method": "keyring_file_add_key", 00:19:43.657 "params": { 00:19:43.657 "name": "key0", 00:19:43.657 "path": "/tmp/tmp.V9RK6Bs3Ls" 00:19:43.657 } 00:19:43.657 } 00:19:43.657 ] 00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "subsystem": "iobuf", 00:19:43.657 "config": [ 00:19:43.657 { 00:19:43.657 "method": "iobuf_set_options", 00:19:43.657 "params": { 00:19:43.657 "small_pool_count": 8192, 00:19:43.657 "large_pool_count": 1024, 00:19:43.657 "small_bufsize": 8192, 00:19:43.657 "large_bufsize": 135168 00:19:43.657 } 00:19:43.657 } 00:19:43.657 ] 00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "subsystem": "sock", 00:19:43.657 "config": [ 00:19:43.657 { 00:19:43.657 "method": "sock_set_default_impl", 00:19:43.657 "params": { 00:19:43.657 "impl_name": "posix" 00:19:43.657 } 
00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "method": "sock_impl_set_options", 00:19:43.657 "params": { 00:19:43.657 "impl_name": "ssl", 00:19:43.657 "recv_buf_size": 4096, 00:19:43.657 "send_buf_size": 4096, 00:19:43.657 "enable_recv_pipe": true, 00:19:43.657 "enable_quickack": false, 00:19:43.657 "enable_placement_id": 0, 00:19:43.657 "enable_zerocopy_send_server": true, 00:19:43.657 "enable_zerocopy_send_client": false, 00:19:43.657 "zerocopy_threshold": 0, 00:19:43.657 "tls_version": 0, 00:19:43.657 "enable_ktls": false 00:19:43.657 } 00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "method": "sock_impl_set_options", 00:19:43.657 "params": { 00:19:43.657 "impl_name": "posix", 00:19:43.657 "recv_buf_size": 2097152, 00:19:43.657 "send_buf_size": 2097152, 00:19:43.657 "enable_recv_pipe": true, 00:19:43.657 "enable_quickack": false, 00:19:43.657 "enable_placement_id": 0, 00:19:43.657 "enable_zerocopy_send_server": true, 00:19:43.657 "enable_zerocopy_send_client": false, 00:19:43.657 "zerocopy_threshold": 0, 00:19:43.657 "tls_version": 0, 00:19:43.657 "enable_ktls": false 00:19:43.657 } 00:19:43.657 } 00:19:43.657 ] 00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "subsystem": "vmd", 00:19:43.657 "config": [] 00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "subsystem": "accel", 00:19:43.657 "config": [ 00:19:43.657 { 00:19:43.657 "method": "accel_set_options", 00:19:43.657 "params": { 00:19:43.657 "small_cache_size": 128, 00:19:43.657 "large_cache_size": 16, 00:19:43.657 "task_count": 2048, 00:19:43.657 "sequence_count": 2048, 00:19:43.657 "buf_count": 2048 00:19:43.657 } 00:19:43.657 } 00:19:43.657 ] 00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "subsystem": "bdev", 00:19:43.657 "config": [ 00:19:43.657 { 00:19:43.657 "method": "bdev_set_options", 00:19:43.657 "params": { 00:19:43.657 "bdev_io_pool_size": 65535, 00:19:43.657 "bdev_io_cache_size": 256, 00:19:43.657 "bdev_auto_examine": true, 00:19:43.657 "iobuf_small_cache_size": 128, 00:19:43.657 "iobuf_large_cache_size": 16 00:19:43.657 } 00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "method": "bdev_raid_set_options", 00:19:43.657 "params": { 00:19:43.657 "process_window_size_kb": 1024 00:19:43.657 } 00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "method": "bdev_iscsi_set_options", 00:19:43.657 "params": { 00:19:43.657 "timeout_sec": 30 00:19:43.657 } 00:19:43.657 }, 00:19:43.657 { 00:19:43.657 "method": "bdev_nvme_set_options", 00:19:43.657 "params": { 00:19:43.657 "action_on_timeout": "none", 00:19:43.657 "timeout_us": 0, 00:19:43.658 "timeout_admin_us": 0, 00:19:43.658 "keep_alive_timeout_ms": 10000, 00:19:43.658 "arbitration_burst": 0, 00:19:43.658 "low_priority_weight": 0, 00:19:43.658 "medium_priority_weight": 0, 00:19:43.658 "high_priority_weight": 0, 00:19:43.658 "nvme_adminq_poll_period_us": 10000, 00:19:43.658 "nvme_ioq_poll_period_us": 0, 00:19:43.658 "io_queue_requests": 0, 00:19:43.658 "delay_cmd_submit": true, 00:19:43.658 "transport_retry_count": 4, 00:19:43.658 "bdev_retry_count": 3, 00:19:43.658 "transport_ack_timeout": 0, 00:19:43.658 "ctrlr_loss_timeout_sec": 0, 00:19:43.658 "reconnect_delay_sec": 0, 00:19:43.658 "fast_io_fail_timeout_sec": 0, 00:19:43.658 "disable_auto_failback": false, 00:19:43.658 "generate_uuids": false, 00:19:43.658 "transport_tos": 0, 00:19:43.658 "nvme_error_stat": false, 00:19:43.658 "rdma_srq_size": 0, 00:19:43.658 "io_path_stat": false, 00:19:43.658 "allow_accel_sequence": false, 00:19:43.658 "rdma_max_cq_size": 0, 00:19:43.658 "rdma_cm_event_timeout_ms": 0, 00:19:43.658 "dhchap_digests": [ 00:19:43.658 "sha256", 
00:19:43.658 "sha384", 00:19:43.658 "sha512" 00:19:43.658 ], 00:19:43.658 "dhchap_dhgroups": [ 00:19:43.658 "null", 00:19:43.658 "ffdhe2048", 00:19:43.658 "ffdhe3072", 00:19:43.658 "ffdhe4096", 00:19:43.658 "ffdhe6144", 00:19:43.658 "ffdhe8192" 00:19:43.658 ] 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "bdev_nvme_set_hotplug", 00:19:43.658 "params": { 00:19:43.658 "period_us": 100000, 00:19:43.658 "enable": false 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "bdev_malloc_create", 00:19:43.658 "params": { 00:19:43.658 "name": "malloc0", 00:19:43.658 "num_blocks": 8192, 00:19:43.658 "block_size": 4096, 00:19:43.658 "physical_block_size": 4096, 00:19:43.658 "uuid": "b7861584-f753-47cc-a672-3dc708e04420", 00:19:43.658 "optimal_io_boundary": 0 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "bdev_wait_for_examine" 00:19:43.658 } 00:19:43.658 ] 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "subsystem": "nbd", 00:19:43.658 "config": [] 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "subsystem": "scheduler", 00:19:43.658 "config": [ 00:19:43.658 { 00:19:43.658 "method": "framework_set_scheduler", 00:19:43.658 "params": { 00:19:43.658 "name": "static" 00:19:43.658 } 00:19:43.658 } 00:19:43.658 ] 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "subsystem": "nvmf", 00:19:43.658 "config": [ 00:19:43.658 { 00:19:43.658 "method": "nvmf_set_config", 00:19:43.658 "params": { 00:19:43.658 "discovery_filter": "match_any", 00:19:43.658 "admin_cmd_passthru": { 00:19:43.658 "identify_ctrlr": false 00:19:43.658 } 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "nvmf_set_max_subsystems", 00:19:43.658 "params": { 00:19:43.658 "max_subsystems": 1024 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "nvmf_set_crdt", 00:19:43.658 "params": { 00:19:43.658 "crdt1": 0, 00:19:43.658 "crdt2": 0, 00:19:43.658 "crdt3": 0 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "nvmf_create_transport", 00:19:43.658 "params": { 00:19:43.658 "trtype": "TCP", 00:19:43.658 "max_queue_depth": 128, 00:19:43.658 "max_io_qpairs_per_ctrlr": 127, 00:19:43.658 "in_capsule_data_size": 4096, 00:19:43.658 "max_io_size": 131072, 00:19:43.658 "io_unit_size": 131072, 00:19:43.658 "max_aq_depth": 128, 00:19:43.658 "num_shared_buffers": 511, 00:19:43.658 "buf_cache_size": 4294967295, 00:19:43.658 "dif_insert_or_strip": false, 00:19:43.658 "zcopy": false, 00:19:43.658 "c2h_success": false, 00:19:43.658 "sock_priority": 0, 00:19:43.658 "abort_timeout_sec": 1, 00:19:43.658 "ack_timeout": 0, 00:19:43.658 "data_wr_pool_size": 0 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "nvmf_create_subsystem", 00:19:43.658 "params": { 00:19:43.658 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.658 "allow_any_host": false, 00:19:43.658 "serial_number": "00000000000000000000", 00:19:43.658 "model_number": "SPDK bdev Controller", 00:19:43.658 "max_namespaces": 32, 00:19:43.658 "min_cntlid": 1, 00:19:43.658 "max_cntlid": 65519, 00:19:43.658 "ana_reporting": false 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "nvmf_subsystem_add_host", 00:19:43.658 "params": { 00:19:43.658 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.658 "host": "nqn.2016-06.io.spdk:host1", 00:19:43.658 "psk": "key0" 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "nvmf_subsystem_add_ns", 00:19:43.658 "params": { 00:19:43.658 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.658 "namespace": { 00:19:43.658 "nsid": 1, 
00:19:43.658 "bdev_name": "malloc0", 00:19:43.658 "nguid": "B7861584F75347CCA6723DC708E04420", 00:19:43.658 "uuid": "b7861584-f753-47cc-a672-3dc708e04420", 00:19:43.658 "no_auto_visible": false 00:19:43.658 } 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "nvmf_subsystem_add_listener", 00:19:43.658 "params": { 00:19:43.658 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.658 "listen_address": { 00:19:43.658 "trtype": "TCP", 00:19:43.658 "adrfam": "IPv4", 00:19:43.658 "traddr": "10.0.0.2", 00:19:43.658 "trsvcid": "4420" 00:19:43.658 }, 00:19:43.658 "secure_channel": false, 00:19:43.658 "sock_impl": "ssl" 00:19:43.658 } 00:19:43.658 } 00:19:43.658 ] 00:19:43.658 } 00:19:43.658 ] 00:19:43.658 }' 00:19:43.658 21:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:43.658 21:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:43.658 "subsystems": [ 00:19:43.658 { 00:19:43.658 "subsystem": "keyring", 00:19:43.658 "config": [ 00:19:43.658 { 00:19:43.658 "method": "keyring_file_add_key", 00:19:43.658 "params": { 00:19:43.658 "name": "key0", 00:19:43.658 "path": "/tmp/tmp.V9RK6Bs3Ls" 00:19:43.658 } 00:19:43.658 } 00:19:43.658 ] 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "subsystem": "iobuf", 00:19:43.658 "config": [ 00:19:43.658 { 00:19:43.658 "method": "iobuf_set_options", 00:19:43.658 "params": { 00:19:43.658 "small_pool_count": 8192, 00:19:43.658 "large_pool_count": 1024, 00:19:43.658 "small_bufsize": 8192, 00:19:43.658 "large_bufsize": 135168 00:19:43.658 } 00:19:43.658 } 00:19:43.658 ] 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "subsystem": "sock", 00:19:43.658 "config": [ 00:19:43.658 { 00:19:43.658 "method": "sock_set_default_impl", 00:19:43.658 "params": { 00:19:43.658 "impl_name": "posix" 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "sock_impl_set_options", 00:19:43.658 "params": { 00:19:43.658 "impl_name": "ssl", 00:19:43.658 "recv_buf_size": 4096, 00:19:43.658 "send_buf_size": 4096, 00:19:43.658 "enable_recv_pipe": true, 00:19:43.658 "enable_quickack": false, 00:19:43.658 "enable_placement_id": 0, 00:19:43.658 "enable_zerocopy_send_server": true, 00:19:43.658 "enable_zerocopy_send_client": false, 00:19:43.658 "zerocopy_threshold": 0, 00:19:43.658 "tls_version": 0, 00:19:43.658 "enable_ktls": false 00:19:43.658 } 00:19:43.658 }, 00:19:43.658 { 00:19:43.658 "method": "sock_impl_set_options", 00:19:43.658 "params": { 00:19:43.658 "impl_name": "posix", 00:19:43.658 "recv_buf_size": 2097152, 00:19:43.658 "send_buf_size": 2097152, 00:19:43.658 "enable_recv_pipe": true, 00:19:43.658 "enable_quickack": false, 00:19:43.658 "enable_placement_id": 0, 00:19:43.658 "enable_zerocopy_send_server": true, 00:19:43.658 "enable_zerocopy_send_client": false, 00:19:43.658 "zerocopy_threshold": 0, 00:19:43.658 "tls_version": 0, 00:19:43.659 "enable_ktls": false 00:19:43.659 } 00:19:43.659 } 00:19:43.659 ] 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "subsystem": "vmd", 00:19:43.659 "config": [] 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "subsystem": "accel", 00:19:43.659 "config": [ 00:19:43.659 { 00:19:43.659 "method": "accel_set_options", 00:19:43.659 "params": { 00:19:43.659 "small_cache_size": 128, 00:19:43.659 "large_cache_size": 16, 00:19:43.659 "task_count": 2048, 00:19:43.659 "sequence_count": 2048, 00:19:43.659 "buf_count": 2048 00:19:43.659 } 00:19:43.659 } 00:19:43.659 ] 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "subsystem": "bdev", 
00:19:43.659 "config": [ 00:19:43.659 { 00:19:43.659 "method": "bdev_set_options", 00:19:43.659 "params": { 00:19:43.659 "bdev_io_pool_size": 65535, 00:19:43.659 "bdev_io_cache_size": 256, 00:19:43.659 "bdev_auto_examine": true, 00:19:43.659 "iobuf_small_cache_size": 128, 00:19:43.659 "iobuf_large_cache_size": 16 00:19:43.659 } 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "method": "bdev_raid_set_options", 00:19:43.659 "params": { 00:19:43.659 "process_window_size_kb": 1024 00:19:43.659 } 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "method": "bdev_iscsi_set_options", 00:19:43.659 "params": { 00:19:43.659 "timeout_sec": 30 00:19:43.659 } 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "method": "bdev_nvme_set_options", 00:19:43.659 "params": { 00:19:43.659 "action_on_timeout": "none", 00:19:43.659 "timeout_us": 0, 00:19:43.659 "timeout_admin_us": 0, 00:19:43.659 "keep_alive_timeout_ms": 10000, 00:19:43.659 "arbitration_burst": 0, 00:19:43.659 "low_priority_weight": 0, 00:19:43.659 "medium_priority_weight": 0, 00:19:43.659 "high_priority_weight": 0, 00:19:43.659 "nvme_adminq_poll_period_us": 10000, 00:19:43.659 "nvme_ioq_poll_period_us": 0, 00:19:43.659 "io_queue_requests": 512, 00:19:43.659 "delay_cmd_submit": true, 00:19:43.659 "transport_retry_count": 4, 00:19:43.659 "bdev_retry_count": 3, 00:19:43.659 "transport_ack_timeout": 0, 00:19:43.659 "ctrlr_loss_timeout_sec": 0, 00:19:43.659 "reconnect_delay_sec": 0, 00:19:43.659 "fast_io_fail_timeout_sec": 0, 00:19:43.659 "disable_auto_failback": false, 00:19:43.659 "generate_uuids": false, 00:19:43.659 "transport_tos": 0, 00:19:43.659 "nvme_error_stat": false, 00:19:43.659 "rdma_srq_size": 0, 00:19:43.659 "io_path_stat": false, 00:19:43.659 "allow_accel_sequence": false, 00:19:43.659 "rdma_max_cq_size": 0, 00:19:43.659 "rdma_cm_event_timeout_ms": 0, 00:19:43.659 "dhchap_digests": [ 00:19:43.659 "sha256", 00:19:43.659 "sha384", 00:19:43.659 "sha512" 00:19:43.659 ], 00:19:43.659 "dhchap_dhgroups": [ 00:19:43.659 "null", 00:19:43.659 "ffdhe2048", 00:19:43.659 "ffdhe3072", 00:19:43.659 "ffdhe4096", 00:19:43.659 "ffdhe6144", 00:19:43.659 "ffdhe8192" 00:19:43.659 ] 00:19:43.659 } 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "method": "bdev_nvme_attach_controller", 00:19:43.659 "params": { 00:19:43.659 "name": "nvme0", 00:19:43.659 "trtype": "TCP", 00:19:43.659 "adrfam": "IPv4", 00:19:43.659 "traddr": "10.0.0.2", 00:19:43.659 "trsvcid": "4420", 00:19:43.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.659 "prchk_reftag": false, 00:19:43.659 "prchk_guard": false, 00:19:43.659 "ctrlr_loss_timeout_sec": 0, 00:19:43.659 "reconnect_delay_sec": 0, 00:19:43.659 "fast_io_fail_timeout_sec": 0, 00:19:43.659 "psk": "key0", 00:19:43.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.659 "hdgst": false, 00:19:43.659 "ddgst": false 00:19:43.659 } 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "method": "bdev_nvme_set_hotplug", 00:19:43.659 "params": { 00:19:43.659 "period_us": 100000, 00:19:43.659 "enable": false 00:19:43.659 } 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "method": "bdev_enable_histogram", 00:19:43.659 "params": { 00:19:43.659 "name": "nvme0n1", 00:19:43.659 "enable": true 00:19:43.659 } 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "method": "bdev_wait_for_examine" 00:19:43.659 } 00:19:43.659 ] 00:19:43.659 }, 00:19:43.659 { 00:19:43.659 "subsystem": "nbd", 00:19:43.659 "config": [] 00:19:43.659 } 00:19:43.659 ] 00:19:43.659 }' 00:19:43.659 21:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 3731078 00:19:43.659 21:57:37 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 3731078 ']' 00:19:43.659 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3731078 00:19:43.659 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:43.659 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.659 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3731078 00:19:43.918 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:43.918 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:43.918 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3731078' 00:19:43.918 killing process with pid 3731078 00:19:43.918 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3731078 00:19:43.918 Received shutdown signal, test time was about 1.000000 seconds 00:19:43.918 00:19:43.918 Latency(us) 00:19:43.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.918 =================================================================================================================== 00:19:43.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.918 21:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3731078 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 3730830 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3730830 ']' 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3730830 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3730830 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3730830' 00:19:43.918 killing process with pid 3730830 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3730830 00:19:43.918 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3730830 00:19:44.178 21:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:44.178 21:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.178 21:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:44.178 "subsystems": [ 00:19:44.178 { 00:19:44.178 "subsystem": "keyring", 00:19:44.178 "config": [ 00:19:44.178 { 00:19:44.178 "method": "keyring_file_add_key", 00:19:44.178 "params": { 00:19:44.178 "name": "key0", 00:19:44.178 "path": "/tmp/tmp.V9RK6Bs3Ls" 00:19:44.178 } 00:19:44.178 } 00:19:44.178 ] 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "subsystem": "iobuf", 00:19:44.178 "config": [ 00:19:44.178 { 00:19:44.178 "method": "iobuf_set_options", 00:19:44.178 "params": { 00:19:44.178 "small_pool_count": 8192, 00:19:44.178 "large_pool_count": 1024, 00:19:44.178 "small_bufsize": 8192, 00:19:44.178 "large_bufsize": 135168 00:19:44.178 } 00:19:44.178 } 00:19:44.178 ] 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "subsystem": "sock", 00:19:44.178 "config": [ 00:19:44.178 { 
00:19:44.178 "method": "sock_set_default_impl", 00:19:44.178 "params": { 00:19:44.178 "impl_name": "posix" 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "sock_impl_set_options", 00:19:44.178 "params": { 00:19:44.178 "impl_name": "ssl", 00:19:44.178 "recv_buf_size": 4096, 00:19:44.178 "send_buf_size": 4096, 00:19:44.178 "enable_recv_pipe": true, 00:19:44.178 "enable_quickack": false, 00:19:44.178 "enable_placement_id": 0, 00:19:44.178 "enable_zerocopy_send_server": true, 00:19:44.178 "enable_zerocopy_send_client": false, 00:19:44.178 "zerocopy_threshold": 0, 00:19:44.178 "tls_version": 0, 00:19:44.178 "enable_ktls": false 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "sock_impl_set_options", 00:19:44.178 "params": { 00:19:44.178 "impl_name": "posix", 00:19:44.178 "recv_buf_size": 2097152, 00:19:44.178 "send_buf_size": 2097152, 00:19:44.178 "enable_recv_pipe": true, 00:19:44.178 "enable_quickack": false, 00:19:44.178 "enable_placement_id": 0, 00:19:44.178 "enable_zerocopy_send_server": true, 00:19:44.178 "enable_zerocopy_send_client": false, 00:19:44.178 "zerocopy_threshold": 0, 00:19:44.178 "tls_version": 0, 00:19:44.178 "enable_ktls": false 00:19:44.178 } 00:19:44.178 } 00:19:44.178 ] 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "subsystem": "vmd", 00:19:44.178 "config": [] 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "subsystem": "accel", 00:19:44.178 "config": [ 00:19:44.178 { 00:19:44.178 "method": "accel_set_options", 00:19:44.178 "params": { 00:19:44.178 "small_cache_size": 128, 00:19:44.178 "large_cache_size": 16, 00:19:44.178 "task_count": 2048, 00:19:44.178 "sequence_count": 2048, 00:19:44.178 "buf_count": 2048 00:19:44.178 } 00:19:44.178 } 00:19:44.178 ] 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "subsystem": "bdev", 00:19:44.178 "config": [ 00:19:44.178 { 00:19:44.178 "method": "bdev_set_options", 00:19:44.178 "params": { 00:19:44.178 "bdev_io_pool_size": 65535, 00:19:44.178 "bdev_io_cache_size": 256, 00:19:44.178 "bdev_auto_examine": true, 00:19:44.178 "iobuf_small_cache_size": 128, 00:19:44.178 "iobuf_large_cache_size": 16 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "bdev_raid_set_options", 00:19:44.178 "params": { 00:19:44.178 "process_window_size_kb": 1024 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "bdev_iscsi_set_options", 00:19:44.178 "params": { 00:19:44.178 "timeout_sec": 30 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "bdev_nvme_set_options", 00:19:44.178 "params": { 00:19:44.178 "action_on_timeout": "none", 00:19:44.178 "timeout_us": 0, 00:19:44.178 "timeout_admin_us": 0, 00:19:44.178 "keep_alive_timeout_ms": 10000, 00:19:44.178 "arbitration_burst": 0, 00:19:44.178 "low_priority_weight": 0, 00:19:44.178 "medium_priority_weight": 0, 00:19:44.178 "high_priority_weight": 0, 00:19:44.178 "nvme_adminq_poll_period_us": 10000, 00:19:44.178 "nvme_ioq_poll_period_us": 0, 00:19:44.178 "io_queue_requests": 0, 00:19:44.178 "delay_cmd_submit": true, 00:19:44.178 "transport_retry_count": 4, 00:19:44.178 "bdev_retry_count": 3, 00:19:44.178 "transport_ack_timeout": 0, 00:19:44.178 "ctrlr_loss_timeout_sec": 0, 00:19:44.178 "reconnect_delay_sec": 0, 00:19:44.178 "fast_io_fail_timeout_sec": 0, 00:19:44.178 "disable_auto_failback": false, 00:19:44.178 "generate_uuids": false, 00:19:44.178 "transport_tos": 0, 00:19:44.178 "nvme_error_stat": false, 00:19:44.178 "rdma_srq_size": 0, 00:19:44.178 "io_path_stat": false, 00:19:44.178 "allow_accel_sequence": false, 00:19:44.178 
"rdma_max_cq_size": 0, 00:19:44.178 "rdma_cm_event_timeout_ms": 0, 00:19:44.178 "dhchap_digests": [ 00:19:44.178 "sha256", 00:19:44.178 "sha384", 00:19:44.178 "sha512" 00:19:44.178 ], 00:19:44.178 "dhchap_dhgroups": [ 00:19:44.178 "null", 00:19:44.178 "ffdhe2048", 00:19:44.178 "ffdhe3072", 00:19:44.178 "ffdhe4096", 00:19:44.178 "ffdhe6144", 00:19:44.178 "ffdhe8192" 00:19:44.178 ] 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "bdev_nvme_set_hotplug", 00:19:44.178 "params": { 00:19:44.178 "period_us": 100000, 00:19:44.178 "enable": false 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "bdev_malloc_create", 00:19:44.178 "params": { 00:19:44.178 "name": "malloc0", 00:19:44.178 "num_blocks": 8192, 00:19:44.178 "block_size": 4096, 00:19:44.178 "physical_block_size": 4096, 00:19:44.178 "uuid": "b7861584-f753-47cc-a672-3dc708e04420", 00:19:44.178 "optimal_io_boundary": 0 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "bdev_wait_for_examine" 00:19:44.178 } 00:19:44.178 ] 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "subsystem": "nbd", 00:19:44.178 "config": [] 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "subsystem": "scheduler", 00:19:44.178 "config": [ 00:19:44.178 { 00:19:44.178 "method": "framework_set_scheduler", 00:19:44.178 "params": { 00:19:44.178 "name": "static" 00:19:44.178 } 00:19:44.178 } 00:19:44.178 ] 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "subsystem": "nvmf", 00:19:44.178 "config": [ 00:19:44.178 { 00:19:44.178 "method": "nvmf_set_config", 00:19:44.178 "params": { 00:19:44.178 "discovery_filter": "match_any", 00:19:44.178 "admin_cmd_passthru": { 00:19:44.178 "identify_ctrlr": false 00:19:44.178 } 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "nvmf_set_max_subsystems", 00:19:44.178 "params": { 00:19:44.178 "max_subsystems": 1024 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "nvmf_set_crdt", 00:19:44.178 "params": { 00:19:44.178 "crdt1": 0, 00:19:44.178 "crdt2": 0, 00:19:44.178 "crdt3": 0 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "nvmf_create_transport", 00:19:44.178 "params": { 00:19:44.178 "trtype": "TCP", 00:19:44.178 "max_queue_depth": 128, 00:19:44.178 "max_io_qpairs_per_ctrlr": 127, 00:19:44.178 "in_capsule_data_size": 4096, 00:19:44.178 "max_io_size": 131072, 00:19:44.178 "io_unit_size": 131072, 00:19:44.178 "max_aq_depth": 128, 00:19:44.178 "num_shared_buffers": 511, 00:19:44.178 "buf_cache_size": 4294967295, 00:19:44.178 "dif_insert_or_strip": false, 00:19:44.178 "zcopy": false, 00:19:44.178 "c2h_success": false, 00:19:44.178 "sock_priority": 0, 00:19:44.178 "abort_timeout_sec": 1, 00:19:44.178 "ack_timeout": 0, 00:19:44.178 "data_wr_pool_size": 0 00:19:44.178 } 00:19:44.178 }, 00:19:44.178 { 00:19:44.178 "method": "nvmf_create_subsystem", 00:19:44.178 "params": { 00:19:44.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.178 00:19:44.178 "allow_any_host": false, 00:19:44.178 "serial_number": "00000000000000000000", 00:19:44.178 "model_number": "SPDK bdev Controller", 00:19:44.178 "max_namespaces": 32, 00:19:44.178 "min_cntlid": 1, 00:19:44.178 "max_cntlid": 65519, 00:19:44.178 "ana_reporting": false 00:19:44.178 } 00:19:44.178 }, 00:19:44.179 { 00:19:44.179 "method": "nvmf_subsystem_add_host", 00:19:44.179 "params": { 00:19:44.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.179 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.179 "psk": "key0" 00:19:44.179 } 
00:19:44.179 }, 00:19:44.179 { 00:19:44.179 "method": "nvmf_subsystem_add_ns", 00:19:44.179 "params": { 00:19:44.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.179 "namespace": { 00:19:44.179 "nsid": 1, 00:19:44.179 "bdev_name": "malloc0", 00:19:44.179 "nguid": "B7861584F75347CCA6723DC708E04420", 00:19:44.179 "uuid": "b7861584-f753-47cc-a672-3dc708e04420", 00:19:44.179 "no_auto_visible": false 00:19:44.179 } 00:19:44.179 } 00:19:44.179 }, 00:19:44.179 { 00:19:44.179 "method": "nvmf_subsystem_add_listener", 00:19:44.179 "params": { 00:19:44.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.179 "listen_address": { 00:19:44.179 "trtype": "TCP", 00:19:44.179 "adrfam": "IPv4", 00:19:44.179 "traddr": "10.0.0.2", 00:19:44.179 "trsvcid": "4420" 00:19:44.179 }, 00:19:44.179 "secure_channel": false, 00:19:44.179 "sock_impl": "ssl" 00:19:44.179 } 00:19:44.179 } 00:19:44.179 ] 00:19:44.179 } 00:19:44.179 ] 00:19:44.179 }' 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3731567 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3731567 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3731567 ']' 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.179 21:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.179 [2024-07-15 21:57:38.390510] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:44.179 [2024-07-15 21:57:38.390556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.179 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.437 [2024-07-15 21:57:38.445829] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.437 [2024-07-15 21:57:38.517063] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.437 [2024-07-15 21:57:38.517105] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.437 [2024-07-15 21:57:38.517112] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.437 [2024-07-15 21:57:38.517118] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.437 [2024-07-15 21:57:38.517142] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
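The -c /dev/fd/62 on the nvmf_tgt command line above is the harness replaying the JSON captured by save_config into a fresh target over an inherited descriptor; a minimal sketch of the same round-trip, assuming config.json holds a prior save_config dump (process substitution is one way to reproduce the /dev/fd/NN path seen in the log):

    ./scripts/rpc.py -s /var/tmp/spdk.sock save_config > config.json
    # bash hands the target a /dev/fd/NN path for the substituted file
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(cat config.json)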
00:19:44.437 [2024-07-15 21:57:38.517198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.695 [2024-07-15 21:57:38.726895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.695 [2024-07-15 21:57:38.758929] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.695 [2024-07-15 21:57:38.767523] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.953 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.953 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:44.953 21:57:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.953 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:44.953 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.212 21:57:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.212 21:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3731804 00:19:45.212 21:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3731804 /var/tmp/bdevperf.sock 00:19:45.212 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3731804 ']' 00:19:45.212 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.213 21:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:45.213 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.213 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
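bdevperf here runs as its own SPDK application on a private RPC socket, and -z makes it wait for configuration before issuing I/O. A sketch of the driver side, combining the RPC-configured first run with this one (socket path, key file, NQNs, and the jq filter all come from the log; in this second run the key and controller arrive via -c /dev/fd/63 instead of RPC):

    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    # load the TLS PSK and attach the controller over the bdevperf RPC socket
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.V9RK6Bs3Ls
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # the attached controller should now be visible, then the run can start
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests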
00:19:45.213 21:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:45.213 "subsystems": [ 00:19:45.213 { 00:19:45.213 "subsystem": "keyring", 00:19:45.213 "config": [ 00:19:45.213 { 00:19:45.213 "method": "keyring_file_add_key", 00:19:45.213 "params": { 00:19:45.213 "name": "key0", 00:19:45.213 "path": "/tmp/tmp.V9RK6Bs3Ls" 00:19:45.213 } 00:19:45.213 } 00:19:45.213 ] 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "subsystem": "iobuf", 00:19:45.213 "config": [ 00:19:45.213 { 00:19:45.213 "method": "iobuf_set_options", 00:19:45.213 "params": { 00:19:45.213 "small_pool_count": 8192, 00:19:45.213 "large_pool_count": 1024, 00:19:45.213 "small_bufsize": 8192, 00:19:45.213 "large_bufsize": 135168 00:19:45.213 } 00:19:45.213 } 00:19:45.213 ] 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "subsystem": "sock", 00:19:45.213 "config": [ 00:19:45.213 { 00:19:45.213 "method": "sock_set_default_impl", 00:19:45.213 "params": { 00:19:45.213 "impl_name": "posix" 00:19:45.213 } 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "method": "sock_impl_set_options", 00:19:45.213 "params": { 00:19:45.213 "impl_name": "ssl", 00:19:45.213 "recv_buf_size": 4096, 00:19:45.213 "send_buf_size": 4096, 00:19:45.213 "enable_recv_pipe": true, 00:19:45.213 "enable_quickack": false, 00:19:45.213 "enable_placement_id": 0, 00:19:45.213 "enable_zerocopy_send_server": true, 00:19:45.213 "enable_zerocopy_send_client": false, 00:19:45.213 "zerocopy_threshold": 0, 00:19:45.213 "tls_version": 0, 00:19:45.213 "enable_ktls": false 00:19:45.213 } 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "method": "sock_impl_set_options", 00:19:45.213 "params": { 00:19:45.213 "impl_name": "posix", 00:19:45.213 "recv_buf_size": 2097152, 00:19:45.213 "send_buf_size": 2097152, 00:19:45.213 "enable_recv_pipe": true, 00:19:45.213 "enable_quickack": false, 00:19:45.213 "enable_placement_id": 0, 00:19:45.213 "enable_zerocopy_send_server": true, 00:19:45.213 "enable_zerocopy_send_client": false, 00:19:45.213 "zerocopy_threshold": 0, 00:19:45.213 "tls_version": 0, 00:19:45.213 "enable_ktls": false 00:19:45.213 } 00:19:45.213 } 00:19:45.213 ] 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "subsystem": "vmd", 00:19:45.213 "config": [] 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "subsystem": "accel", 00:19:45.213 "config": [ 00:19:45.213 { 00:19:45.213 "method": "accel_set_options", 00:19:45.213 "params": { 00:19:45.213 "small_cache_size": 128, 00:19:45.213 "large_cache_size": 16, 00:19:45.213 "task_count": 2048, 00:19:45.213 "sequence_count": 2048, 00:19:45.213 "buf_count": 2048 00:19:45.213 } 00:19:45.213 } 00:19:45.213 ] 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "subsystem": "bdev", 00:19:45.213 "config": [ 00:19:45.213 { 00:19:45.213 "method": "bdev_set_options", 00:19:45.213 "params": { 00:19:45.213 "bdev_io_pool_size": 65535, 00:19:45.213 "bdev_io_cache_size": 256, 00:19:45.213 "bdev_auto_examine": true, 00:19:45.213 "iobuf_small_cache_size": 128, 00:19:45.213 "iobuf_large_cache_size": 16 00:19:45.213 } 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "method": "bdev_raid_set_options", 00:19:45.213 "params": { 00:19:45.213 "process_window_size_kb": 1024 00:19:45.213 } 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "method": "bdev_iscsi_set_options", 00:19:45.213 "params": { 00:19:45.213 "timeout_sec": 30 00:19:45.213 } 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "method": "bdev_nvme_set_options", 00:19:45.213 "params": { 00:19:45.213 "action_on_timeout": "none", 00:19:45.213 "timeout_us": 0, 00:19:45.213 "timeout_admin_us": 0, 00:19:45.213 "keep_alive_timeout_ms": 
10000, 00:19:45.213 "arbitration_burst": 0, 00:19:45.213 "low_priority_weight": 0, 00:19:45.213 "medium_priority_weight": 0, 00:19:45.213 "high_priority_weight": 0, 00:19:45.213 "nvme_adminq_poll_period_us": 10000, 00:19:45.213 "nvme_ioq_poll_period_us": 0, 00:19:45.213 "io_queue_requests": 512, 00:19:45.213 "delay_cmd_submit": true, 00:19:45.213 "transport_retry_count": 4, 00:19:45.213 "bdev_retry_count": 3, 00:19:45.213 "transport_ack_timeout": 0, 00:19:45.213 "ctrlr_loss_timeout_sec": 0, 00:19:45.213 "reconnect_delay_sec": 0, 00:19:45.213 "fast_io_fail_timeout_sec": 0, 00:19:45.213 "disable_auto_failback": false, 00:19:45.213 "generate_uuids": false, 00:19:45.213 "transport_tos": 0, 00:19:45.213 "nvme_error_stat": false, 00:19:45.213 "rdma_srq_size": 0, 00:19:45.213 "io_path_stat": false, 00:19:45.213 "allow_accel_sequence": false, 00:19:45.213 "rdma_max_cq_size": 0, 00:19:45.213 "rdma_cm_event_timeout_ms": 0, 00:19:45.213 "dhchap_digests": [ 00:19:45.213 "sha256", 00:19:45.213 "sha384", 00:19:45.213 "sha512" 00:19:45.213 ], 00:19:45.213 "dhchap_dhgroups": [ 00:19:45.213 "null", 00:19:45.213 "ffdhe2048", 00:19:45.213 "ffdhe3072", 00:19:45.213 "ffdhe4096", 00:19:45.213 "ffdhe6144", 00:19:45.213 "ffdhe8192" 00:19:45.213 ] 00:19:45.213 } 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "method": "bdev_nvme_attach_controller", 00:19:45.213 "params": { 00:19:45.213 "name": "nvme0", 00:19:45.213 "trtype": "TCP", 00:19:45.213 "adrfam": "IPv4", 00:19:45.213 "traddr": "10.0.0.2", 00:19:45.213 "trsvcid": "4420", 00:19:45.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.213 "prchk_reftag": false, 00:19:45.213 "prchk_guard": false, 00:19:45.213 "ctrlr_loss_timeout_sec": 0, 00:19:45.213 "reconnect_delay_sec": 0, 00:19:45.213 "fast_io_fail_timeout_sec": 0, 00:19:45.213 "psk": "key0", 00:19:45.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.213 "hdgst": false, 00:19:45.213 "ddgst": false 00:19:45.213 } 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "method": "bdev_nvme_set_hotplug", 00:19:45.213 "params": { 00:19:45.213 "period_us": 100000, 00:19:45.213 "enable": false 00:19:45.213 } 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "method": "bdev_enable_histogram", 00:19:45.213 "params": { 00:19:45.213 "name": "nvme0n1", 00:19:45.213 "enable": true 00:19:45.213 } 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "method": "bdev_wait_for_examine" 00:19:45.213 } 00:19:45.213 ] 00:19:45.213 }, 00:19:45.213 { 00:19:45.213 "subsystem": "nbd", 00:19:45.213 "config": [] 00:19:45.213 } 00:19:45.213 ] 00:19:45.213 }' 00:19:45.213 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.213 21:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.213 [2024-07-15 21:57:39.270076] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:19:45.213 [2024-07-15 21:57:39.270122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731804 ] 00:19:45.213 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.213 [2024-07-15 21:57:39.324637] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.213 [2024-07-15 21:57:39.397529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.472 [2024-07-15 21:57:39.548959] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.081 21:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.081 21:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:46.081 21:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:46.082 21:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:46.082 21:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.082 21:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:46.340 Running I/O for 1 seconds... 00:19:47.276 00:19:47.276 Latency(us) 00:19:47.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.276 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:47.276 Verification LBA range: start 0x0 length 0x2000 00:19:47.276 nvme0n1 : 1.03 3949.03 15.43 0.00 0.00 32013.53 4616.01 54936.26 00:19:47.276 =================================================================================================================== 00:19:47.276 Total : 3949.03 15.43 0.00 0.00 32013.53 4616.01 54936.26 00:19:47.276 0 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:47.276 nvmf_trace.0 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3731804 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3731804 ']' 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3731804 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3731804 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3731804' 00:19:47.276 killing process with pid 3731804 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3731804 00:19:47.276 Received shutdown signal, test time was about 1.000000 seconds 00:19:47.276 00:19:47.276 Latency(us) 00:19:47.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.276 =================================================================================================================== 00:19:47.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.276 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3731804 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.535 rmmod nvme_tcp 00:19:47.535 rmmod nvme_fabrics 00:19:47.535 rmmod nvme_keyring 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3731567 ']' 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3731567 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3731567 ']' 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3731567 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.535 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3731567 00:19:47.793 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:47.793 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:47.793 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3731567' 00:19:47.793 killing process with pid 3731567 00:19:47.793 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3731567 00:19:47.793 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3731567 00:19:47.793 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.793 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.793 21:57:41 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.793 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.793 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.794 21:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.794 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.794 21:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.326 21:57:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.326 21:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.EuqvBQI1zY /tmp/tmp.aprbZcxj3Z /tmp/tmp.V9RK6Bs3Ls 00:19:50.326 00:19:50.326 real 1m24.540s 00:19:50.326 user 2m11.760s 00:19:50.326 sys 0m27.748s 00:19:50.326 21:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:50.326 21:57:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.326 ************************************ 00:19:50.326 END TEST nvmf_tls 00:19:50.326 ************************************ 00:19:50.326 21:57:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:50.326 21:57:44 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:50.326 21:57:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:50.326 21:57:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.326 21:57:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:50.326 ************************************ 00:19:50.326 START TEST nvmf_fips 00:19:50.326 ************************************ 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:50.326 * Looking for test storage... 
00:19:50.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.326 21:57:44 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.326 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:50.327 Error setting digest 00:19:50.327 00523038157F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:50.327 00523038157F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.327 21:57:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.601 
21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:55.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:55.601 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:55.601 Found net devices under 0000:86:00.0: cvl_0_0 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:55.601 Found net devices under 0000:86:00.1: cvl_0_1 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:19:55.601 00:19:55.601 --- 10.0.0.2 ping statistics --- 00:19:55.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.601 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:55.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:19:55.601 00:19:55.601 --- 10.0.0.1 ping statistics --- 00:19:55.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.601 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:19:55.601 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3735606 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3735606 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3735606 ']' 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.602 21:57:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:55.860 [2024-07-15 21:57:49.885935] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:55.860 [2024-07-15 21:57:49.885981] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.860 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.860 [2024-07-15 21:57:49.943154] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.861 [2024-07-15 21:57:50.026793] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.861 [2024-07-15 21:57:50.026828] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
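A readable recap of the nvmf_tcp_init sequence traced above, since it is easy to lose in the xtrace noise: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A minimal sketch of the same rig; tgt0, ini0 and tgt_ns are placeholder names for cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk:

  # Sketch only -- interface and namespace names are placeholders.
  ip netns add tgt_ns                        # private namespace for the target
  ip link set tgt0 netns tgt_ns              # target-side port moves into it
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev tgt0
  ip addr add 10.0.0.1/24 dev ini0           # initiator port stays in the root ns
  ip netns exec tgt_ns ip link set tgt0 up
  ip link set ini0 up
  iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                         # initiator -> target
  ip netns exec tgt_ns ping -c 1 10.0.0.1    # target -> initiator

Looping the two physical ports back over the wire this way lets a single host act as both NVMe/TCP target and initiator with real NIC hardware in the data path.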
00:19:55.861 [2024-07-15 21:57:50.026835] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.861 [2024-07-15 21:57:50.026841] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.861 [2024-07-15 21:57:50.026847] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.861 [2024-07-15 21:57:50.026863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.428 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.428 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:56.428 21:57:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.428 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.428 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:56.687 [2024-07-15 21:57:50.859887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.687 [2024-07-15 21:57:50.875896] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.687 [2024-07-15 21:57:50.876047] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.687 [2024-07-15 21:57:50.904067] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:56.687 malloc0 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3735853 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3735853 /var/tmp/bdevperf.sock 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3735853 ']' 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.687 21:57:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:56.946 [2024-07-15 21:57:50.978253] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:19:56.946 [2024-07-15 21:57:50.978302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735853 ] 00:19:56.946 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.946 [2024-07-15 21:57:51.030096] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.946 [2024-07-15 21:57:51.102312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.881 21:57:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.881 21:57:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:57.881 21:57:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:57.881 [2024-07-15 21:57:51.933051] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.881 [2024-07-15 21:57:51.933133] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:57.881 TLSTESTn1 00:19:57.881 21:57:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:57.881 Running I/O for 10 seconds... 
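The FIPS test body above reduces to: write an NVMe TLS interchange-format PSK to disk, lock its permissions, register it with the target, then have bdevperf attach over TLS with the same key. A condensed sketch of the initiator side, using the key, address and NQNs exactly as they appear in the trace (only the workspace-relative rpc.py path is shortened for readability):

  # Condensed from fips.sh@136-150 above; rpc.py path shortened.
  key=key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
  chmod 0600 "$key"                          # PSK files must not be world-readable
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"

Note the deprecation warnings in the trace: this path-based --psk flow was already scheduled for removal in v24.09 in favor of the keyring-based API.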
00:20:10.085 00:20:10.085 Latency(us) 00:20:10.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.085 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:10.085 Verification LBA range: start 0x0 length 0x2000 00:20:10.085 TLSTESTn1 : 10.03 2805.19 10.96 0.00 0.00 45562.09 5071.92 75679.83 00:20:10.085 =================================================================================================================== 00:20:10.085 Total : 2805.19 10.96 0.00 0.00 45562.09 5071.92 75679.83 00:20:10.085 0 00:20:10.085 21:58:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:10.085 21:58:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:10.086 nvmf_trace.0 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3735853 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3735853 ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3735853 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3735853 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3735853' 00:20:10.086 killing process with pid 3735853 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3735853 00:20:10.086 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.086 00:20:10.086 Latency(us) 00:20:10.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.086 =================================================================================================================== 00:20:10.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.086 [2024-07-15 21:58:02.295713] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3735853 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.086 rmmod nvme_tcp 00:20:10.086 rmmod nvme_fabrics 00:20:10.086 rmmod nvme_keyring 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3735606 ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3735606 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3735606 ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3735606 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3735606 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3735606' 00:20:10.086 killing process with pid 3735606 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3735606 00:20:10.086 [2024-07-15 21:58:02.584117] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3735606 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.086 21:58:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.654 21:58:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.654 21:58:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:10.654 00:20:10.654 real 0m20.743s 00:20:10.654 user 0m22.242s 00:20:10.654 sys 0m9.320s 00:20:10.654 21:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.654 21:58:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:10.654 ************************************ 00:20:10.654 END TEST nvmf_fips 
00:20:10.654 ************************************ 00:20:10.654 21:58:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:10.654 21:58:04 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:10.654 21:58:04 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:10.654 21:58:04 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:10.654 21:58:04 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:10.654 21:58:04 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.654 21:58:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:15.960 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:15.960 21:58:09 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:15.960 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:15.960 Found net devices under 0000:86:00.0: cvl_0_0 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:15.960 Found net devices under 0000:86:00.1: cvl_0_1 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:15.960 21:58:09 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:15.960 21:58:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:15.960 21:58:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
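The device discovery that recurs throughout this log (gather_supported_nvmf_pci_devs) walks a table of supported NIC PCI IDs and maps each matching PCI function to its kernel net interface through sysfs. Reduced to just the E810 10/25G variant matched here (0x8086:0x159b), the core of the scan is roughly the following sketch; the real helper also covers E810 0x1592, x722 0x37d2 and a list of Mellanox IDs, and filters on link state:

  # Rough reduction, not the helper's actual code.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci (0x8086 - 0x159b)"
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
      done
  done

On this rig that yields the two cvl_0_x interfaces, which is why every scan above prints the same two "Found net devices" lines.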
00:20:15.960 21:58:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.960 ************************************ 00:20:15.960 START TEST nvmf_perf_adq 00:20:15.960 ************************************ 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:15.960 * Looking for test storage... 00:20:15.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.960 21:58:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:20.148 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:20.148 Found 0000:86:00.1 (0x8086 - 0x159b) 
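Stepping out of the device scan for a moment: the perf_adq preamble above (nvmf/common.sh@17-19) mints the host identity fresh from nvme-cli, and the host ID is simply the NQN's trailing UUID. A sketch of that step, assuming the suffix-stripping matches what common.sh does internally:

  # From the trace: NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
  NVME_HOSTNQN=$(nvme gen-hostnqn)           # random uuid-based host NQN
  NVME_HOSTID=${NVME_HOSTNQN##*:}            # bare UUID: strip through the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

These are the flags that later nvme connect invocations pass to identify this host to the target.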
00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:20.148 Found net devices under 0000:86:00.0: cvl_0_0 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:20.148 Found net devices under 0000:86:00.1: cvl_0_1 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:20.148 21:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:21.525 21:58:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:23.430 21:58:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:28.702 21:58:22 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:28.702 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:28.702 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.702 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:28.702 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:28.702 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:28.703 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:28.703 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:28.703 Found net devices under 0000:86:00.0: cvl_0_0 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:28.703 Found net devices under 0000:86:00.1: cvl_0_1 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.703 21:58:22 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:20:28.703 00:20:28.703 --- 10.0.0.2 ping statistics --- 00:20:28.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.703 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:20:28.703 00:20:28.703 --- 10.0.0.1 ping statistics --- 00:20:28.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.703 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3745376 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3745376 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3745376 ']' 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.703 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.704 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.704 21:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.704 21:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:28.704 [2024-07-15 21:58:22.653070] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
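At this point the test bed is wired as a two-namespace loopback: nvmf_tcp_init moved the first E810 port (cvl_0_0, 10.0.0.2/24) into the cvl_0_0_ns_spdk namespace to host the target, left the second port (cvl_0_1, 10.0.0.1/24) in the root namespace as the initiator side, opened TCP port 4420 through iptables, and confirmed reachability with one ping in each direction. Condensed from the xtrace above (this run's interface and namespace names; a recap of the traced commands, not a standalone recipe):

  ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the first port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns

The target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0xF --wait-for-rpc), which is the startup now beginning below.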
00:20:28.704 [2024-07-15 21:58:22.653111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.704 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.704 [2024-07-15 21:58:22.710756] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.704 [2024-07-15 21:58:22.792671] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.704 [2024-07-15 21:58:22.792706] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.704 [2024-07-15 21:58:22.792713] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.704 [2024-07-15 21:58:22.792720] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.704 [2024-07-15 21:58:22.792725] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.704 [2024-07-15 21:58:22.792770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.704 [2024-07-15 21:58:22.792786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.704 [2024-07-15 21:58:22.792874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.704 [2024-07-15 21:58:22.792876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.271 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 [2024-07-15 21:58:23.651029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 Malloc1 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 [2024-07-15 21:58:23.702607] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3745571 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:29.529 21:58:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:29.529 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.062 21:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:32.062 21:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.062 21:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.062 21:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.062 21:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:32.062 
"tick_rate": 2300000000, 00:20:32.062 "poll_groups": [ 00:20:32.062 { 00:20:32.062 "name": "nvmf_tgt_poll_group_000", 00:20:32.062 "admin_qpairs": 1, 00:20:32.062 "io_qpairs": 1, 00:20:32.062 "current_admin_qpairs": 1, 00:20:32.062 "current_io_qpairs": 1, 00:20:32.062 "pending_bdev_io": 0, 00:20:32.062 "completed_nvme_io": 19741, 00:20:32.062 "transports": [ 00:20:32.062 { 00:20:32.062 "trtype": "TCP" 00:20:32.062 } 00:20:32.062 ] 00:20:32.062 }, 00:20:32.062 { 00:20:32.062 "name": "nvmf_tgt_poll_group_001", 00:20:32.062 "admin_qpairs": 0, 00:20:32.062 "io_qpairs": 1, 00:20:32.062 "current_admin_qpairs": 0, 00:20:32.062 "current_io_qpairs": 1, 00:20:32.062 "pending_bdev_io": 0, 00:20:32.062 "completed_nvme_io": 19970, 00:20:32.062 "transports": [ 00:20:32.062 { 00:20:32.062 "trtype": "TCP" 00:20:32.062 } 00:20:32.062 ] 00:20:32.062 }, 00:20:32.062 { 00:20:32.062 "name": "nvmf_tgt_poll_group_002", 00:20:32.062 "admin_qpairs": 0, 00:20:32.062 "io_qpairs": 1, 00:20:32.062 "current_admin_qpairs": 0, 00:20:32.062 "current_io_qpairs": 1, 00:20:32.062 "pending_bdev_io": 0, 00:20:32.062 "completed_nvme_io": 19881, 00:20:32.062 "transports": [ 00:20:32.062 { 00:20:32.062 "trtype": "TCP" 00:20:32.062 } 00:20:32.062 ] 00:20:32.063 }, 00:20:32.063 { 00:20:32.063 "name": "nvmf_tgt_poll_group_003", 00:20:32.063 "admin_qpairs": 0, 00:20:32.063 "io_qpairs": 1, 00:20:32.063 "current_admin_qpairs": 0, 00:20:32.063 "current_io_qpairs": 1, 00:20:32.063 "pending_bdev_io": 0, 00:20:32.063 "completed_nvme_io": 19499, 00:20:32.063 "transports": [ 00:20:32.063 { 00:20:32.063 "trtype": "TCP" 00:20:32.063 } 00:20:32.063 ] 00:20:32.063 } 00:20:32.063 ] 00:20:32.063 }' 00:20:32.063 21:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:32.063 21:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:32.063 21:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:32.063 21:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:32.063 21:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3745571 00:20:40.180 Initializing NVMe Controllers 00:20:40.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:40.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:40.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:40.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:40.180 Initialization complete. Launching workers. 
00:20:40.180 ======================================================== 00:20:40.180 Latency(us) 00:20:40.180 Device Information : IOPS MiB/s Average min max 00:20:40.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10369.48 40.51 6191.82 2160.72 44790.80 00:20:40.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10605.47 41.43 6034.57 2302.38 10322.60 00:20:40.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10559.37 41.25 6060.05 2030.11 9869.34 00:20:40.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10481.97 40.95 6104.94 1188.37 9718.45 00:20:40.180 ======================================================== 00:20:40.180 Total : 42016.28 164.13 6097.34 1188.37 44790.80 00:20:40.180 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.180 rmmod nvme_tcp 00:20:40.180 rmmod nvme_fabrics 00:20:40.180 rmmod nvme_keyring 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3745376 ']' 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3745376 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3745376 ']' 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3745376 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.180 21:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745376 00:20:40.180 21:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:40.180 21:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:40.180 21:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745376' 00:20:40.180 killing process with pid 3745376 00:20:40.180 21:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3745376 00:20:40.180 21:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3745376 00:20:40.180 21:58:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:40.180 21:58:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:40.180 21:58:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:40.181 21:58:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.181 21:58:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.181 21:58:34 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.181 21:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.181 21:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.106 21:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.106 21:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:42.106 21:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:43.531 21:58:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:45.432 21:58:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.703 21:58:44 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:50.703 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:50.703 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:50.703 Found net devices under 0000:86:00.0: cvl_0_0 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:50.703 Found net devices under 0000:86:00.1: cvl_0_1 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.703 
21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.703 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:50.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:20:50.703 00:20:50.703 --- 10.0.0.2 ping statistics --- 00:20:50.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.703 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:20:50.704 00:20:50.704 --- 10.0.0.1 ping statistics --- 00:20:50.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.704 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:50.704 net.core.busy_poll = 1 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:50.704 net.core.busy_read = 1 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:50.704 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3749349 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3749349 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3749349 ']' 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:50.963 21:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.963 [2024-07-15 21:58:45.010422] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:20:50.963 [2024-07-15 21:58:45.010469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.963 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.963 [2024-07-15 21:58:45.067017] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.963 [2024-07-15 21:58:45.151907] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.963 [2024-07-15 21:58:45.151942] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.963 [2024-07-15 21:58:45.151949] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.963 [2024-07-15 21:58:45.151955] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.963 [2024-07-15 21:58:45.151960] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
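This second pass is the ADQ-enabled run. Where the baseline above used --sock-priority 0 and placement-id 0, adq_configure_driver has just steered NVMe/TCP traffic into a dedicated hardware traffic class on the E810 port, ahead of the target startup now in progress. Condensed from the xtrace (device, namespace, address, and port as used in this run):

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1    # busy-poll in poll/select/epoll waits
  sysctl -w net.core.busy_read=1    # busy-poll on socket reads
  # two traffic classes: TC0 gets 2 queues at offset 0, TC1 gets 2 queues at offset 2
  ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio \
      num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
  # hardware flower filter: NVMe/TCP to 10.0.0.2:4420 is steered into TC1
  ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: \
      prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The target side is then configured with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1 (both 0 in the baseline run), and the nvmf_get_stats output below shows the difference: in this run the four connections land on two poll groups (io_qpairs 3/1/0/0) rather than one per group.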
00:20:50.963 [2024-07-15 21:58:45.151998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.963 [2024-07-15 21:58:45.152095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.963 [2024-07-15 21:58:45.152170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.963 [2024-07-15 21:58:45.152171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.898 21:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.898 [2024-07-15 21:58:45.996082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.898 Malloc1 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.898 21:58:46 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.898 [2024-07-15 21:58:46.043932] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3749598 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:51.898 21:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:51.898 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:54.429 "tick_rate": 2300000000, 00:20:54.429 "poll_groups": [ 00:20:54.429 { 00:20:54.429 "name": "nvmf_tgt_poll_group_000", 00:20:54.429 "admin_qpairs": 1, 00:20:54.429 "io_qpairs": 3, 00:20:54.429 "current_admin_qpairs": 1, 00:20:54.429 "current_io_qpairs": 3, 00:20:54.429 "pending_bdev_io": 0, 00:20:54.429 "completed_nvme_io": 29438, 00:20:54.429 "transports": [ 00:20:54.429 { 00:20:54.429 "trtype": "TCP" 00:20:54.429 } 00:20:54.429 ] 00:20:54.429 }, 00:20:54.429 { 00:20:54.429 "name": "nvmf_tgt_poll_group_001", 00:20:54.429 "admin_qpairs": 0, 00:20:54.429 "io_qpairs": 1, 00:20:54.429 "current_admin_qpairs": 0, 00:20:54.429 "current_io_qpairs": 1, 00:20:54.429 "pending_bdev_io": 0, 00:20:54.429 "completed_nvme_io": 27300, 00:20:54.429 "transports": [ 00:20:54.429 { 00:20:54.429 "trtype": "TCP" 00:20:54.429 } 00:20:54.429 ] 00:20:54.429 }, 00:20:54.429 { 00:20:54.429 "name": "nvmf_tgt_poll_group_002", 00:20:54.429 "admin_qpairs": 0, 00:20:54.429 "io_qpairs": 0, 00:20:54.429 "current_admin_qpairs": 0, 00:20:54.429 "current_io_qpairs": 0, 00:20:54.429 "pending_bdev_io": 0, 00:20:54.429 "completed_nvme_io": 0, 
00:20:54.429 "transports": [ 00:20:54.429 { 00:20:54.429 "trtype": "TCP" 00:20:54.429 } 00:20:54.429 ] 00:20:54.429 }, 00:20:54.429 { 00:20:54.429 "name": "nvmf_tgt_poll_group_003", 00:20:54.429 "admin_qpairs": 0, 00:20:54.429 "io_qpairs": 0, 00:20:54.429 "current_admin_qpairs": 0, 00:20:54.429 "current_io_qpairs": 0, 00:20:54.429 "pending_bdev_io": 0, 00:20:54.429 "completed_nvme_io": 0, 00:20:54.429 "transports": [ 00:20:54.429 { 00:20:54.429 "trtype": "TCP" 00:20:54.429 } 00:20:54.429 ] 00:20:54.429 } 00:20:54.429 ] 00:20:54.429 }' 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:54.429 21:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3749598 00:21:02.541 Initializing NVMe Controllers 00:21:02.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:02.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:02.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:02.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:02.541 Initialization complete. Launching workers. 00:21:02.541 ======================================================== 00:21:02.541 Latency(us) 00:21:02.541 Device Information : IOPS MiB/s Average min max 00:21:02.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4982.80 19.46 12851.17 1606.06 59300.69 00:21:02.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14475.40 56.54 4434.81 1360.35 44528.87 00:21:02.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4728.40 18.47 13543.20 1824.83 59115.71 00:21:02.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5839.10 22.81 10966.45 1491.76 58935.29 00:21:02.541 ======================================================== 00:21:02.541 Total : 30025.69 117.29 8536.10 1360.35 59300.69 00:21:02.541 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.541 rmmod nvme_tcp 00:21:02.541 rmmod nvme_fabrics 00:21:02.541 rmmod nvme_keyring 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3749349 ']' 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3749349 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3749349 ']' 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3749349 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3749349 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3749349' 00:21:02.541 killing process with pid 3749349 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3749349 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3749349 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.541 21:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.828 21:58:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:05.828 21:58:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:05.828 00:21:05.828 real 0m49.823s 00:21:05.828 user 2m49.285s 00:21:05.828 sys 0m8.811s 00:21:05.828 21:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:05.828 21:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.828 ************************************ 00:21:05.828 END TEST nvmf_perf_adq 00:21:05.828 ************************************ 00:21:05.828 21:58:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:05.828 21:58:59 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:05.828 21:58:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:05.828 21:58:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.828 21:58:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:05.828 ************************************ 00:21:05.828 START TEST nvmf_shutdown 00:21:05.828 ************************************ 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:05.828 * Looking for test storage... 
00:21:05.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.828 21:58:59 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:05.829 ************************************ 00:21:05.829 START TEST nvmf_shutdown_tc1 00:21:05.829 ************************************ 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:05.829 21:58:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.829 21:58:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.102 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:11.103 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.103 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.103 21:59:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:11.103 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.103 Found net devices under 0000:86:00.1: cvl_0_1 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:11.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:11.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms
00:21:11.103
00:21:11.103 --- 10.0.0.2 ping statistics ---
00:21:11.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:11.103 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:11.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:11.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms
00:21:11.103
00:21:11.103 --- 10.0.0.1 ping statistics ---
00:21:11.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:11.103 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:11.103 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3754998
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3754998
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3754998 ']'
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:11.395 21:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:21:11.395 [2024-07-15 21:59:05.401684] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:21:11.395 [2024-07-15 21:59:05.401728] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.395 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.395 [2024-07-15 21:59:05.460366] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.395 [2024-07-15 21:59:05.533721] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.395 [2024-07-15 21:59:05.533762] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.395 [2024-07-15 21:59:05.533769] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.395 [2024-07-15 21:59:05.533774] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.395 [2024-07-15 21:59:05.533779] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.395 [2024-07-15 21:59:05.533886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.395 [2024-07-15 21:59:05.533973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.395 [2024-07-15 21:59:05.534064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.395 [2024-07-15 21:59:05.534065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:11.967 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.967 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:11.967 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.967 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.967 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.227 [2024-07-15 21:59:06.242188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:12.227 21:59:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.227 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.227 Malloc1 00:21:12.227 [2024-07-15 21:59:06.338027] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.227 Malloc2 00:21:12.227 Malloc3 00:21:12.227 Malloc4 00:21:12.486 Malloc5 00:21:12.486 Malloc6 00:21:12.486 Malloc7 00:21:12.486 Malloc8 00:21:12.486 Malloc9 00:21:12.486 Malloc10 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3755284 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3755284 
/var/tmp/bdevperf.sock 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3755284 ']' 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.746 { 00:21:12.746 "params": { 00:21:12.746 "name": "Nvme$subsystem", 00:21:12.746 "trtype": "$TEST_TRANSPORT", 00:21:12.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.746 "adrfam": "ipv4", 00:21:12.746 "trsvcid": "$NVMF_PORT", 00:21:12.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.746 "hdgst": ${hdgst:-false}, 00:21:12.746 "ddgst": ${ddgst:-false} 00:21:12.746 }, 00:21:12.746 "method": "bdev_nvme_attach_controller" 00:21:12.746 } 00:21:12.746 EOF 00:21:12.746 )") 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.746 { 00:21:12.746 "params": { 00:21:12.746 "name": "Nvme$subsystem", 00:21:12.746 "trtype": "$TEST_TRANSPORT", 00:21:12.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.746 "adrfam": "ipv4", 00:21:12.746 "trsvcid": "$NVMF_PORT", 00:21:12.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.746 "hdgst": ${hdgst:-false}, 00:21:12.746 "ddgst": ${ddgst:-false} 00:21:12.746 }, 00:21:12.746 "method": "bdev_nvme_attach_controller" 00:21:12.746 } 00:21:12.746 EOF 00:21:12.746 )") 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.746 { 00:21:12.746 "params": { 00:21:12.746 
"name": "Nvme$subsystem", 00:21:12.746 "trtype": "$TEST_TRANSPORT", 00:21:12.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.746 "adrfam": "ipv4", 00:21:12.746 "trsvcid": "$NVMF_PORT", 00:21:12.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.746 "hdgst": ${hdgst:-false}, 00:21:12.746 "ddgst": ${ddgst:-false} 00:21:12.746 }, 00:21:12.746 "method": "bdev_nvme_attach_controller" 00:21:12.746 } 00:21:12.746 EOF 00:21:12.746 )") 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.746 { 00:21:12.746 "params": { 00:21:12.746 "name": "Nvme$subsystem", 00:21:12.746 "trtype": "$TEST_TRANSPORT", 00:21:12.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.746 "adrfam": "ipv4", 00:21:12.746 "trsvcid": "$NVMF_PORT", 00:21:12.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.746 "hdgst": ${hdgst:-false}, 00:21:12.746 "ddgst": ${ddgst:-false} 00:21:12.746 }, 00:21:12.746 "method": "bdev_nvme_attach_controller" 00:21:12.746 } 00:21:12.746 EOF 00:21:12.746 )") 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.746 { 00:21:12.746 "params": { 00:21:12.746 "name": "Nvme$subsystem", 00:21:12.746 "trtype": "$TEST_TRANSPORT", 00:21:12.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.746 "adrfam": "ipv4", 00:21:12.746 "trsvcid": "$NVMF_PORT", 00:21:12.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.746 "hdgst": ${hdgst:-false}, 00:21:12.746 "ddgst": ${ddgst:-false} 00:21:12.746 }, 00:21:12.746 "method": "bdev_nvme_attach_controller" 00:21:12.746 } 00:21:12.746 EOF 00:21:12.746 )") 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.746 { 00:21:12.746 "params": { 00:21:12.746 "name": "Nvme$subsystem", 00:21:12.746 "trtype": "$TEST_TRANSPORT", 00:21:12.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.746 "adrfam": "ipv4", 00:21:12.746 "trsvcid": "$NVMF_PORT", 00:21:12.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.746 "hdgst": ${hdgst:-false}, 00:21:12.746 "ddgst": ${ddgst:-false} 00:21:12.746 }, 00:21:12.746 "method": "bdev_nvme_attach_controller" 00:21:12.746 } 00:21:12.746 EOF 00:21:12.746 )") 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.746 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.747 { 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme$subsystem", 
00:21:12.747 "trtype": "$TEST_TRANSPORT", 00:21:12.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "$NVMF_PORT", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.747 "hdgst": ${hdgst:-false}, 00:21:12.747 "ddgst": ${ddgst:-false} 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 } 00:21:12.747 EOF 00:21:12.747 )") 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.747 [2024-07-15 21:59:06.809808] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:21:12.747 [2024-07-15 21:59:06.809861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.747 { 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme$subsystem", 00:21:12.747 "trtype": "$TEST_TRANSPORT", 00:21:12.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "$NVMF_PORT", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.747 "hdgst": ${hdgst:-false}, 00:21:12.747 "ddgst": ${ddgst:-false} 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 } 00:21:12.747 EOF 00:21:12.747 )") 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.747 { 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme$subsystem", 00:21:12.747 "trtype": "$TEST_TRANSPORT", 00:21:12.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "$NVMF_PORT", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.747 "hdgst": ${hdgst:-false}, 00:21:12.747 "ddgst": ${ddgst:-false} 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 } 00:21:12.747 EOF 00:21:12.747 )") 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.747 { 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme$subsystem", 00:21:12.747 "trtype": "$TEST_TRANSPORT", 00:21:12.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "$NVMF_PORT", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.747 "hdgst": ${hdgst:-false}, 00:21:12.747 "ddgst": ${ddgst:-false} 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 } 00:21:12.747 EOF 00:21:12.747 )") 00:21:12.747 21:59:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:12.747 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:12.747 21:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme1", 00:21:12.747 "trtype": "tcp", 00:21:12.747 "traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 },{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme2", 00:21:12.747 "trtype": "tcp", 00:21:12.747 "traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 },{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme3", 00:21:12.747 "trtype": "tcp", 00:21:12.747 "traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 },{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme4", 00:21:12.747 "trtype": "tcp", 00:21:12.747 "traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 },{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme5", 00:21:12.747 "trtype": "tcp", 00:21:12.747 "traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 },{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme6", 00:21:12.747 "trtype": "tcp", 00:21:12.747 "traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 },{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme7", 00:21:12.747 "trtype": "tcp", 00:21:12.747 "traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 },{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme8", 00:21:12.747 "trtype": "tcp", 00:21:12.747 
"traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 },{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme9", 00:21:12.747 "trtype": "tcp", 00:21:12.747 "traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 },{ 00:21:12.747 "params": { 00:21:12.747 "name": "Nvme10", 00:21:12.747 "trtype": "tcp", 00:21:12.747 "traddr": "10.0.0.2", 00:21:12.747 "adrfam": "ipv4", 00:21:12.747 "trsvcid": "4420", 00:21:12.747 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:12.747 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:12.747 "hdgst": false, 00:21:12.747 "ddgst": false 00:21:12.747 }, 00:21:12.747 "method": "bdev_nvme_attach_controller" 00:21:12.747 }' 00:21:12.747 [2024-07-15 21:59:06.866537] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.747 [2024-07-15 21:59:06.941630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.644 21:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.644 21:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:14.644 21:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:14.644 21:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.644 21:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:14.644 21:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.644 21:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3755284 00:21:14.644 21:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:14.644 21:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:15.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3755284 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3754998 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.208 { 00:21:15.208 "params": { 00:21:15.208 "name": "Nvme$subsystem", 00:21:15.208 "trtype": "$TEST_TRANSPORT", 00:21:15.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.208 "adrfam": "ipv4", 00:21:15.208 "trsvcid": "$NVMF_PORT", 00:21:15.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.208 "hdgst": ${hdgst:-false}, 00:21:15.208 "ddgst": ${ddgst:-false} 00:21:15.208 }, 00:21:15.208 "method": "bdev_nvme_attach_controller" 00:21:15.208 } 00:21:15.208 EOF 00:21:15.208 )") 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.208 { 00:21:15.208 "params": { 00:21:15.208 "name": "Nvme$subsystem", 00:21:15.208 "trtype": "$TEST_TRANSPORT", 00:21:15.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.208 "adrfam": "ipv4", 00:21:15.208 "trsvcid": "$NVMF_PORT", 00:21:15.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.208 "hdgst": ${hdgst:-false}, 00:21:15.208 "ddgst": ${ddgst:-false} 00:21:15.208 }, 00:21:15.208 "method": "bdev_nvme_attach_controller" 00:21:15.208 } 00:21:15.208 EOF 00:21:15.208 )") 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.208 { 00:21:15.208 "params": { 00:21:15.208 "name": "Nvme$subsystem", 00:21:15.208 "trtype": "$TEST_TRANSPORT", 00:21:15.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.208 "adrfam": "ipv4", 00:21:15.208 "trsvcid": "$NVMF_PORT", 00:21:15.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.208 "hdgst": ${hdgst:-false}, 00:21:15.208 "ddgst": ${ddgst:-false} 00:21:15.208 }, 00:21:15.208 "method": "bdev_nvme_attach_controller" 00:21:15.208 } 00:21:15.208 EOF 00:21:15.208 )") 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.208 { 00:21:15.208 "params": { 00:21:15.208 "name": "Nvme$subsystem", 00:21:15.208 "trtype": "$TEST_TRANSPORT", 00:21:15.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.208 "adrfam": "ipv4", 00:21:15.208 "trsvcid": "$NVMF_PORT", 00:21:15.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.208 "hdgst": ${hdgst:-false}, 00:21:15.208 "ddgst": ${ddgst:-false} 00:21:15.208 }, 00:21:15.208 "method": "bdev_nvme_attach_controller" 00:21:15.208 } 00:21:15.208 EOF 00:21:15.208 )") 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:21:15.208 { 00:21:15.208 "params": { 00:21:15.208 "name": "Nvme$subsystem", 00:21:15.208 "trtype": "$TEST_TRANSPORT", 00:21:15.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.208 "adrfam": "ipv4", 00:21:15.208 "trsvcid": "$NVMF_PORT", 00:21:15.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.208 "hdgst": ${hdgst:-false}, 00:21:15.208 "ddgst": ${ddgst:-false} 00:21:15.208 }, 00:21:15.208 "method": "bdev_nvme_attach_controller" 00:21:15.208 } 00:21:15.208 EOF 00:21:15.208 )") 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.208 { 00:21:15.208 "params": { 00:21:15.208 "name": "Nvme$subsystem", 00:21:15.208 "trtype": "$TEST_TRANSPORT", 00:21:15.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.208 "adrfam": "ipv4", 00:21:15.208 "trsvcid": "$NVMF_PORT", 00:21:15.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.208 "hdgst": ${hdgst:-false}, 00:21:15.208 "ddgst": ${ddgst:-false} 00:21:15.208 }, 00:21:15.208 "method": "bdev_nvme_attach_controller" 00:21:15.208 } 00:21:15.208 EOF 00:21:15.208 )") 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.208 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.209 { 00:21:15.209 "params": { 00:21:15.209 "name": "Nvme$subsystem", 00:21:15.209 "trtype": "$TEST_TRANSPORT", 00:21:15.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.209 "adrfam": "ipv4", 00:21:15.209 "trsvcid": "$NVMF_PORT", 00:21:15.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.209 "hdgst": ${hdgst:-false}, 00:21:15.209 "ddgst": ${ddgst:-false} 00:21:15.209 }, 00:21:15.209 "method": "bdev_nvme_attach_controller" 00:21:15.209 } 00:21:15.209 EOF 00:21:15.209 )") 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.209 [2024-07-15 21:59:09.428809] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:21:15.209 [2024-07-15 21:59:09.428859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755650 ] 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.209 { 00:21:15.209 "params": { 00:21:15.209 "name": "Nvme$subsystem", 00:21:15.209 "trtype": "$TEST_TRANSPORT", 00:21:15.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.209 "adrfam": "ipv4", 00:21:15.209 "trsvcid": "$NVMF_PORT", 00:21:15.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.209 "hdgst": ${hdgst:-false}, 00:21:15.209 "ddgst": ${ddgst:-false} 00:21:15.209 }, 00:21:15.209 "method": "bdev_nvme_attach_controller" 00:21:15.209 } 00:21:15.209 EOF 00:21:15.209 )") 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.209 { 00:21:15.209 "params": { 00:21:15.209 "name": "Nvme$subsystem", 00:21:15.209 "trtype": "$TEST_TRANSPORT", 00:21:15.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.209 "adrfam": "ipv4", 00:21:15.209 "trsvcid": "$NVMF_PORT", 00:21:15.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.209 "hdgst": ${hdgst:-false}, 00:21:15.209 "ddgst": ${ddgst:-false} 00:21:15.209 }, 00:21:15.209 "method": "bdev_nvme_attach_controller" 00:21:15.209 } 00:21:15.209 EOF 00:21:15.209 )") 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.209 { 00:21:15.209 "params": { 00:21:15.209 "name": "Nvme$subsystem", 00:21:15.209 "trtype": "$TEST_TRANSPORT", 00:21:15.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.209 "adrfam": "ipv4", 00:21:15.209 "trsvcid": "$NVMF_PORT", 00:21:15.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.209 "hdgst": ${hdgst:-false}, 00:21:15.209 "ddgst": ${ddgst:-false} 00:21:15.209 }, 00:21:15.209 "method": "bdev_nvme_attach_controller" 00:21:15.209 } 00:21:15.209 EOF 00:21:15.209 )") 00:21:15.209 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:15.487 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
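The config+= / heredoc churn traced above is gen_nvmf_target_json from nvmf/common.sh at work: one bdev_nvme_attach_controller stanza per subsystem id, joined with IFS=, and piped through jq. A minimal sketch of the same pattern, simplified from what the trace shows (the full-document wrapper the real helper adds around the stanzas is elided here, and hdgst/ddgst default to false as in this run):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one attach-controller stanza per subsystem id (defaults to 1)
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join the stanzas with commas (the IFS=, step in the trace) and let jq
    # validate and pretty-print the result for bdevperf's --json input
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .
}

In the run above it is invoked as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, with tcp / 10.0.0.2 / 4420 substituted for the three environment variables, and fed to bdevperf through /dev/fd/62 process substitution.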
00:21:15.487 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:15.487 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.487 21:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme1", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 },{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme2", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:15.487 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 },{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme3", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:15.487 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 },{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme4", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:15.487 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 },{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme5", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:15.487 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 },{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme6", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:15.487 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 },{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme7", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:15.487 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 },{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme8", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:15.487 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 },{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme9", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:15.487 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 },{ 00:21:15.487 "params": { 00:21:15.487 "name": "Nvme10", 00:21:15.487 "trtype": "tcp", 00:21:15.487 "traddr": "10.0.0.2", 00:21:15.487 "adrfam": "ipv4", 00:21:15.487 "trsvcid": "4420", 00:21:15.487 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:15.487 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:15.487 "hdgst": false, 00:21:15.487 "ddgst": false 00:21:15.487 }, 00:21:15.487 "method": "bdev_nvme_attach_controller" 00:21:15.487 }' 00:21:15.487 [2024-07-15 21:59:09.487851] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.487 [2024-07-15 21:59:09.563377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.858 Running I/O for 1 seconds... 00:21:18.231 00:21:18.231 Latency(us) 00:21:18.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.231 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme1n1 : 1.06 241.30 15.08 0.00 0.00 262632.40 18008.15 218833.25 00:21:18.231 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme2n1 : 1.12 229.31 14.33 0.00 0.00 272660.70 18805.98 246187.41 00:21:18.231 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme3n1 : 1.13 283.81 17.74 0.00 0.00 217099.40 25302.59 208803.39 00:21:18.231 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme4n1 : 1.11 292.13 18.26 0.00 0.00 207022.35 15956.59 217921.45 00:21:18.231 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme5n1 : 1.13 282.94 17.68 0.00 0.00 211497.32 15956.59 217921.45 00:21:18.231 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme6n1 : 1.16 276.75 17.30 0.00 0.00 213356.19 22111.28 226127.69 00:21:18.231 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme7n1 : 1.12 289.53 18.10 0.00 0.00 199426.26 6097.70 217921.45 00:21:18.231 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme8n1 : 1.12 287.31 17.96 0.00 0.00 198595.54 3048.85 214274.23 00:21:18.231 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme9n1 : 1.16 276.03 17.25 0.00 0.00 204426.37 16184.54 228863.11 00:21:18.231 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:18.231 
Verification LBA range: start 0x0 length 0x400 00:21:18.231 Nvme10n1 : 1.17 328.20 20.51 0.00 0.00 169418.35 4872.46 219745.06 00:21:18.231 =================================================================================================================== 00:21:18.231 Total : 2787.32 174.21 0.00 0.00 212509.93 3048.85 246187.41 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.231 rmmod nvme_tcp 00:21:18.231 rmmod nvme_fabrics 00:21:18.231 rmmod nvme_keyring 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3754998 ']' 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3754998 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3754998 ']' 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3754998 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.231 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3754998 00:21:18.490 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:18.490 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:18.490 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3754998' 00:21:18.490 killing process with pid 3754998 00:21:18.490 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3754998 00:21:18.490 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3754998 00:21:18.748 
21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.748 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.748 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.748 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.749 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.749 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.749 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.749 21:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.284 00:21:21.284 real 0m15.083s 00:21:21.284 user 0m34.903s 00:21:21.284 sys 0m5.432s 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.284 ************************************ 00:21:21.284 END TEST nvmf_shutdown_tc1 00:21:21.284 ************************************ 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.284 ************************************ 00:21:21.284 START TEST nvmf_shutdown_tc2 00:21:21.284 ************************************ 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.284 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.285 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.285 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.285 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.285 21:59:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.285 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.285 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.285 21:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.285 21:59:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:21.285 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:21.285 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:21.285 Found net devices under 0000:86:00.0: cvl_0_0 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:21.285 Found net devices under 0000:86:00.1: cvl_0_1 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:21.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:21:21.285 00:21:21.285 --- 10.0.0.2 ping statistics --- 00:21:21.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.285 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:21:21.285 00:21:21.285 --- 10.0.0.1 ping statistics --- 00:21:21.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.285 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:21.285 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3756787 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3756787 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3756787 ']' 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.286 21:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.286 [2024-07-15 21:59:15.313188] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:21:21.286 [2024-07-15 21:59:15.313240] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.286 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.286 [2024-07-15 21:59:15.376041] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.286 [2024-07-15 21:59:15.455419] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.286 [2024-07-15 21:59:15.455454] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.286 [2024-07-15 21:59:15.455462] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.286 [2024-07-15 21:59:15.455468] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.286 [2024-07-15 21:59:15.455473] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
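Worth pausing on the environment this target runs in: nvmf_tcp_init, traced a few entries back, split the two E810 ports between the root namespace (initiator, 10.0.0.1 on cvl_0_1) and a private namespace (target, 10.0.0.2 on cvl_0_0), so the NVMe/TCP traffic crosses a real link. Collected from that trace, the sequence is approximately:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                               # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1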
00:21:21.286 [2024-07-15 21:59:15.455521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.286 [2024-07-15 21:59:15.455540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.286 [2024-07-15 21:59:15.455649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.286 [2024-07-15 21:59:15.455650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.967 [2024-07-15 21:59:16.161034] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.967 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.229 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:22.229 Malloc1 00:21:22.229 [2024-07-15 21:59:16.256827] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.229 Malloc2 00:21:22.229 Malloc3 00:21:22.229 Malloc4 00:21:22.229 Malloc5 00:21:22.229 Malloc6 00:21:22.488 Malloc7 00:21:22.488 Malloc8 00:21:22.488 Malloc9 00:21:22.488 Malloc10 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3757065 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3757065 /var/tmp/bdevperf.sock 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3757065 ']' 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
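Note the --json /dev/fd/63 in the bdevperf command line above: the generated config never touches disk; it is handed over through bash process substitution. In outline (path abbreviated, exact quoting assumed):

# /dev/fd/63 in the trace is the read end of this process substitution.
build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 10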
00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.488 { 00:21:22.488 "params": { 00:21:22.488 "name": "Nvme$subsystem", 00:21:22.488 "trtype": "$TEST_TRANSPORT", 00:21:22.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.488 "adrfam": "ipv4", 00:21:22.488 "trsvcid": "$NVMF_PORT", 00:21:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.488 "hdgst": ${hdgst:-false}, 00:21:22.488 "ddgst": ${ddgst:-false} 00:21:22.488 }, 00:21:22.488 "method": "bdev_nvme_attach_controller" 00:21:22.488 } 00:21:22.488 EOF 00:21:22.488 )") 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.488 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.488 { 00:21:22.488 "params": { 00:21:22.488 "name": "Nvme$subsystem", 00:21:22.488 "trtype": "$TEST_TRANSPORT", 00:21:22.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.488 "adrfam": "ipv4", 00:21:22.488 "trsvcid": "$NVMF_PORT", 00:21:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.488 "hdgst": ${hdgst:-false}, 00:21:22.489 "ddgst": ${ddgst:-false} 00:21:22.489 }, 00:21:22.489 "method": "bdev_nvme_attach_controller" 00:21:22.489 } 00:21:22.489 EOF 00:21:22.489 )") 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.489 { 00:21:22.489 "params": { 00:21:22.489 "name": "Nvme$subsystem", 00:21:22.489 "trtype": "$TEST_TRANSPORT", 00:21:22.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.489 "adrfam": "ipv4", 00:21:22.489 "trsvcid": "$NVMF_PORT", 00:21:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.489 "hdgst": ${hdgst:-false}, 00:21:22.489 "ddgst": ${ddgst:-false} 00:21:22.489 }, 00:21:22.489 "method": "bdev_nvme_attach_controller" 00:21:22.489 } 00:21:22.489 EOF 00:21:22.489 )") 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.489 { 00:21:22.489 "params": { 00:21:22.489 "name": "Nvme$subsystem", 00:21:22.489 "trtype": "$TEST_TRANSPORT", 00:21:22.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.489 "adrfam": "ipv4", 00:21:22.489 "trsvcid": "$NVMF_PORT", 
00:21:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.489 "hdgst": ${hdgst:-false}, 00:21:22.489 "ddgst": ${ddgst:-false} 00:21:22.489 }, 00:21:22.489 "method": "bdev_nvme_attach_controller" 00:21:22.489 } 00:21:22.489 EOF 00:21:22.489 )") 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.489 { 00:21:22.489 "params": { 00:21:22.489 "name": "Nvme$subsystem", 00:21:22.489 "trtype": "$TEST_TRANSPORT", 00:21:22.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.489 "adrfam": "ipv4", 00:21:22.489 "trsvcid": "$NVMF_PORT", 00:21:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.489 "hdgst": ${hdgst:-false}, 00:21:22.489 "ddgst": ${ddgst:-false} 00:21:22.489 }, 00:21:22.489 "method": "bdev_nvme_attach_controller" 00:21:22.489 } 00:21:22.489 EOF 00:21:22.489 )") 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.489 { 00:21:22.489 "params": { 00:21:22.489 "name": "Nvme$subsystem", 00:21:22.489 "trtype": "$TEST_TRANSPORT", 00:21:22.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.489 "adrfam": "ipv4", 00:21:22.489 "trsvcid": "$NVMF_PORT", 00:21:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.489 "hdgst": ${hdgst:-false}, 00:21:22.489 "ddgst": ${ddgst:-false} 00:21:22.489 }, 00:21:22.489 "method": "bdev_nvme_attach_controller" 00:21:22.489 } 00:21:22.489 EOF 00:21:22.489 )") 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.489 { 00:21:22.489 "params": { 00:21:22.489 "name": "Nvme$subsystem", 00:21:22.489 "trtype": "$TEST_TRANSPORT", 00:21:22.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.489 "adrfam": "ipv4", 00:21:22.489 "trsvcid": "$NVMF_PORT", 00:21:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.489 "hdgst": ${hdgst:-false}, 00:21:22.489 "ddgst": ${ddgst:-false} 00:21:22.489 }, 00:21:22.489 "method": "bdev_nvme_attach_controller" 00:21:22.489 } 00:21:22.489 EOF 00:21:22.489 )") 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.489 { 00:21:22.489 "params": { 00:21:22.489 "name": "Nvme$subsystem", 00:21:22.489 "trtype": "$TEST_TRANSPORT", 00:21:22.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.489 "adrfam": "ipv4", 00:21:22.489 "trsvcid": "$NVMF_PORT", 00:21:22.489 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.489 "hdgst": ${hdgst:-false}, 00:21:22.489 "ddgst": ${ddgst:-false} 00:21:22.489 }, 00:21:22.489 "method": "bdev_nvme_attach_controller" 00:21:22.489 } 00:21:22.489 EOF 00:21:22.489 )") 00:21:22.489 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.489 [2024-07-15 21:59:16.725801] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:21:22.489 [2024-07-15 21:59:16.725852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757065 ] 00:21:22.748 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.748 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.748 { 00:21:22.748 "params": { 00:21:22.748 "name": "Nvme$subsystem", 00:21:22.748 "trtype": "$TEST_TRANSPORT", 00:21:22.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.748 "adrfam": "ipv4", 00:21:22.748 "trsvcid": "$NVMF_PORT", 00:21:22.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.748 "hdgst": ${hdgst:-false}, 00:21:22.748 "ddgst": ${ddgst:-false} 00:21:22.748 }, 00:21:22.748 "method": "bdev_nvme_attach_controller" 00:21:22.748 } 00:21:22.748 EOF 00:21:22.748 )") 00:21:22.748 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.748 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.748 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.748 { 00:21:22.748 "params": { 00:21:22.748 "name": "Nvme$subsystem", 00:21:22.748 "trtype": "$TEST_TRANSPORT", 00:21:22.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.748 "adrfam": "ipv4", 00:21:22.748 "trsvcid": "$NVMF_PORT", 00:21:22.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.748 "hdgst": ${hdgst:-false}, 00:21:22.748 "ddgst": ${ddgst:-false} 00:21:22.748 }, 00:21:22.748 "method": "bdev_nvme_attach_controller" 00:21:22.748 } 00:21:22.748 EOF 00:21:22.748 )") 00:21:22.748 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:22.748 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:21:22.748 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:22.748 21:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:22.748 "params": { 00:21:22.748 "name": "Nvme1", 00:21:22.748 "trtype": "tcp", 00:21:22.748 "traddr": "10.0.0.2", 00:21:22.748 "adrfam": "ipv4", 00:21:22.748 "trsvcid": "4420", 00:21:22.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.748 "hdgst": false, 00:21:22.748 "ddgst": false 00:21:22.748 }, 00:21:22.748 "method": "bdev_nvme_attach_controller" 00:21:22.748 },{ 00:21:22.748 "params": { 00:21:22.748 "name": "Nvme2", 00:21:22.748 "trtype": "tcp", 00:21:22.748 "traddr": "10.0.0.2", 00:21:22.748 "adrfam": "ipv4", 00:21:22.748 "trsvcid": "4420", 00:21:22.748 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:22.748 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:22.748 "hdgst": false, 00:21:22.748 "ddgst": false 00:21:22.748 }, 00:21:22.748 "method": "bdev_nvme_attach_controller" 00:21:22.748 },{ 00:21:22.748 "params": { 00:21:22.748 "name": "Nvme3", 00:21:22.748 "trtype": "tcp", 00:21:22.748 "traddr": "10.0.0.2", 00:21:22.748 "adrfam": "ipv4", 00:21:22.748 "trsvcid": "4420", 00:21:22.748 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:22.748 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:22.748 "hdgst": false, 00:21:22.748 "ddgst": false 00:21:22.748 }, 00:21:22.748 "method": "bdev_nvme_attach_controller" 00:21:22.748 },{ 00:21:22.748 "params": { 00:21:22.748 "name": "Nvme4", 00:21:22.748 "trtype": "tcp", 00:21:22.748 "traddr": "10.0.0.2", 00:21:22.748 "adrfam": "ipv4", 00:21:22.748 "trsvcid": "4420", 00:21:22.748 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:22.748 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:22.748 "hdgst": false, 00:21:22.748 "ddgst": false 00:21:22.748 }, 00:21:22.748 "method": "bdev_nvme_attach_controller" 00:21:22.748 },{ 00:21:22.748 "params": { 00:21:22.748 "name": "Nvme5", 00:21:22.748 "trtype": "tcp", 00:21:22.748 "traddr": "10.0.0.2", 00:21:22.748 "adrfam": "ipv4", 00:21:22.748 "trsvcid": "4420", 00:21:22.748 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:22.748 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:22.748 "hdgst": false, 00:21:22.748 "ddgst": false 00:21:22.749 }, 00:21:22.749 "method": "bdev_nvme_attach_controller" 00:21:22.749 },{ 00:21:22.749 "params": { 00:21:22.749 "name": "Nvme6", 00:21:22.749 "trtype": "tcp", 00:21:22.749 "traddr": "10.0.0.2", 00:21:22.749 "adrfam": "ipv4", 00:21:22.749 "trsvcid": "4420", 00:21:22.749 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:22.749 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:22.749 "hdgst": false, 00:21:22.749 "ddgst": false 00:21:22.749 }, 00:21:22.749 "method": "bdev_nvme_attach_controller" 00:21:22.749 },{ 00:21:22.749 "params": { 00:21:22.749 "name": "Nvme7", 00:21:22.749 "trtype": "tcp", 00:21:22.749 "traddr": "10.0.0.2", 00:21:22.749 "adrfam": "ipv4", 00:21:22.749 "trsvcid": "4420", 00:21:22.749 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:22.749 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:22.749 "hdgst": false, 00:21:22.749 "ddgst": false 00:21:22.749 }, 00:21:22.749 "method": "bdev_nvme_attach_controller" 00:21:22.749 },{ 00:21:22.749 "params": { 00:21:22.749 "name": "Nvme8", 00:21:22.749 "trtype": "tcp", 00:21:22.749 "traddr": "10.0.0.2", 00:21:22.749 "adrfam": "ipv4", 00:21:22.749 "trsvcid": "4420", 00:21:22.749 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:22.749 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:22.749 "hdgst": false, 
00:21:22.749 "ddgst": false 00:21:22.749 }, 00:21:22.749 "method": "bdev_nvme_attach_controller" 00:21:22.749 },{ 00:21:22.749 "params": { 00:21:22.749 "name": "Nvme9", 00:21:22.749 "trtype": "tcp", 00:21:22.749 "traddr": "10.0.0.2", 00:21:22.749 "adrfam": "ipv4", 00:21:22.749 "trsvcid": "4420", 00:21:22.749 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:22.749 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:22.749 "hdgst": false, 00:21:22.749 "ddgst": false 00:21:22.749 }, 00:21:22.749 "method": "bdev_nvme_attach_controller" 00:21:22.749 },{ 00:21:22.749 "params": { 00:21:22.749 "name": "Nvme10", 00:21:22.749 "trtype": "tcp", 00:21:22.749 "traddr": "10.0.0.2", 00:21:22.749 "adrfam": "ipv4", 00:21:22.749 "trsvcid": "4420", 00:21:22.749 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:22.749 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:22.749 "hdgst": false, 00:21:22.749 "ddgst": false 00:21:22.749 }, 00:21:22.749 "method": "bdev_nvme_attach_controller" 00:21:22.749 }' 00:21:22.749 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.749 [2024-07-15 21:59:16.782966] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.749 [2024-07-15 21:59:16.856604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.670 Running I/O for 10 seconds... 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:24.670 21:59:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.670 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.928 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:24.928 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:24.928 21:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:25.186 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:25.186 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:25.186 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:25.186 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:25.186 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.186 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:25.186 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.186 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:25.186 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3757065 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3757065 ']' 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3757065 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3757065 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.187 21:59:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3757065' 00:21:25.187 killing process with pid 3757065 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3757065 00:21:25.187 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3757065 00:21:25.187 Received shutdown signal, test time was about 0.905418 seconds 00:21:25.187 00:21:25.187 Latency(us) 00:21:25.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.187 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme1n1 : 0.90 282.95 17.68 0.00 0.00 223770.27 20857.54 217009.64 00:21:25.187 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme2n1 : 0.88 294.86 18.43 0.00 0.00 209577.22 3490.50 213362.42 00:21:25.187 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme3n1 : 0.87 292.65 18.29 0.00 0.00 208368.86 13791.05 216097.84 00:21:25.187 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme4n1 : 0.89 289.26 18.08 0.00 0.00 206787.01 15500.69 214274.23 00:21:25.187 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme5n1 : 0.90 285.09 17.82 0.00 0.00 206073.32 18350.08 217921.45 00:21:25.187 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme6n1 : 0.89 286.35 17.90 0.00 0.00 201266.98 18122.13 217009.64 00:21:25.187 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme7n1 : 0.89 288.44 18.03 0.00 0.00 195638.21 13734.07 216097.84 00:21:25.187 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme8n1 : 0.90 283.79 17.74 0.00 0.00 195439.75 19261.89 216097.84 00:21:25.187 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme9n1 : 0.86 222.15 13.88 0.00 0.00 242999.95 20059.71 220656.86 00:21:25.187 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.187 Verification LBA range: start 0x0 length 0x400 00:21:25.187 Nvme10n1 : 0.88 219.24 13.70 0.00 0.00 240752.19 20629.59 244363.80 00:21:25.187 =================================================================================================================== 00:21:25.187 Total : 2744.78 171.55 0.00 0.00 211547.89 3490.50 244363.80 00:21:25.446 21:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3756787 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:26.380 21:59:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:26.380 rmmod nvme_tcp 00:21:26.380 rmmod nvme_fabrics 00:21:26.380 rmmod nvme_keyring 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3756787 ']' 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3756787 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3756787 ']' 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3756787 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.380 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3756787 00:21:26.639 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:26.639 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:26.639 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3756787' 00:21:26.639 killing process with pid 3756787 00:21:26.640 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3756787 00:21:26.640 21:59:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3756787 00:21:26.899 21:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:26.899 21:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:26.899 21:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:26.899 21:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
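
nvmfcleanup, just traced, retries the initiator module unload because nvme-tcp can stay referenced for a moment while connections drain; set +e keeps a busy module from aborting the script, and the rmmod lines confirm nvme_tcp, nvme_fabrics and nvme_keyring really left the kernel. The retry idiom, sketched (the break-on-success and the back-off sleep are assumptions; the trace only shows the sync, the {1..20} loop bound and the two modprobe -v -r calls):

    #!/usr/bin/env bash
    # Unload the NVMe/TCP initiator modules, retrying while refs drain.
    set +e                        # a busy module must not abort the run
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 0.2                 # hypothetical back-off between attempts
    done
    modprobe -v -r nvme-fabrics   # then drop the fabrics core as well
    set -e
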
00:21:26.899 21:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:26.899 21:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.899 21:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.899 21:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:29.428 00:21:29.428 real 0m8.096s 00:21:29.428 user 0m24.937s 00:21:29.428 sys 0m1.310s 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.428 ************************************ 00:21:29.428 END TEST nvmf_shutdown_tc2 00:21:29.428 ************************************ 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:29.428 ************************************ 00:21:29.428 START TEST nvmf_shutdown_tc3 00:21:29.428 ************************************ 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
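
Two framework details sit in the teardown/startup seam above. First, eval '_remove_spdk_ns 14> /dev/null': the harness routes bash xtrace output through a dedicated file descriptor, so redirecting that fd (evidently 14 here) to /dev/null silences tracing for just one command instead of toggling set -x globally. Second, _remove_spdk_ns clears any namespace a previous run left behind. A plausible approximation of both, since the helper body itself is not visible in this trace and the *_ns_spdk name pattern is inferred from the namespace names shown:

    #!/usr/bin/env bash
    exec 14>&2          # dedicate fd 14 to xtrace output
    BASH_XTRACEFD=14    # bash now writes set -x tracing to fd 14
    set -x

    _remove_spdk_ns() {
        # Delete leftover SPDK test namespaces (name pattern assumed).
        while read -r ns _; do
            [[ $ns == *_ns_spdk ]] && ip netns delete "$ns"
        done < <(ip netns list)
    }

    # Redirecting fd 14 silences the trace for this one invocation only:
    eval '_remove_spdk_ns 14> /dev/null'
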
00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
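
gather_supported_nvmf_pci_devs, unrolled above, buckets NICs by PCI vendor:device pairs: 0x1592/0x159b go to e810, 0x37d2 to x722, and the listed Mellanox IDs to mlx, after which the e810 bucket wins because this job runs with SPDK_TEST_NVMF_NICS=e810. The pci_bus_cache lookups come from an earlier bus scan; a self-contained equivalent that reads sysfs directly (the loop shape is mine, the IDs are from the trace):

    #!/usr/bin/env bash
    # Collect PCI addresses of E810-class NICs (vendor 0x8086, device
    # 0x1592 or 0x159b) -- the same bucket the trace ends up using.
    e810=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")
        device=$(<"$dev/device")
        [ "$vendor" = 0x8086 ] || continue
        case $device in
            0x1592 | 0x159b) e810+=("${dev##*/}") ;;
        esac
    done
    for pci in "${e810[@]}"; do
        echo "Found $pci (0x8086 - $(<"/sys/bus/pci/devices/$pci/device"))"
    done

On this node that prints the two 0x159b ports, 0000:86:00.0 and 0000:86:00.1, matching the Found lines in the trace.
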
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:29.428 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:29.428 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.428 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:29.429 Found net devices under 0000:86:00.0: cvl_0_0 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.429 21:59:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:29.429 Found net devices under 0000:86:00.1: cvl_0_1 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.429 21:59:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:29.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:29.429 00:21:29.429 --- 10.0.0.2 ping statistics --- 00:21:29.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.429 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:29.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:21:29.429 00:21:29.429 --- 10.0.0.1 ping statistics --- 00:21:29.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.429 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3758167 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3758167 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3758167 ']' 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.429 21:59:23 
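
The nvmf_tcp_init sequence completed above is the heart of a NET_TYPE=phy run: one port of the NIC (cvl_0_0) moves into a private namespace to act as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP port 4420, and a ping in each direction proves the path before any NVMe traffic is attempted. The same steps as a standalone script, names and addresses exactly as traced:

    #!/usr/bin/env bash
    # Split two ports of one NIC between a target netns and the root ns,
    # then verify connectivity -- the nvmf_tcp_init steps from the trace.
    ns=cvl_0_0_ns_spdk
    tgt_if=cvl_0_0    # target side, will own 10.0.0.2
    ini_if=cvl_0_1    # initiator side, keeps 10.0.0.1

    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"    # port vanishes from the root ns
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns  -> target ns
    ip netns exec "$ns" ping -c 1 10.0.0.1   # target ns -> root ns

The sub-millisecond round trips in the ping statistics are consistent with both ports sitting on the same host, so what is being measured is essentially wire plus stack latency.
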
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.429 21:59:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.429 [2024-07-15 21:59:23.508992] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:21:29.429 [2024-07-15 21:59:23.509031] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.429 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.429 [2024-07-15 21:59:23.569046] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.429 [2024-07-15 21:59:23.642567] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.429 [2024-07-15 21:59:23.642609] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.429 [2024-07-15 21:59:23.642616] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.429 [2024-07-15 21:59:23.642622] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.429 [2024-07-15 21:59:23.642626] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.429 [2024-07-15 21:59:23.642731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.429 [2024-07-15 21:59:23.642816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.429 [2024-07-15 21:59:23.642903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.429 [2024-07-15 21:59:23.642904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 [2024-07-15 21:59:24.352292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
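
nvmfappstart launches nvmf_tgt inside the target namespace (hence the stacked ip netns exec prefixes on its command line) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock; only after the reactors report in does the test create the TCP transport. The gate reduces to poll-an-RPC-until-it-answers. A minimal version, where using spdk_get_version as the probe and the retry bounds are my assumptions:

    #!/usr/bin/env bash
    # Wait until an SPDK app answers RPCs on its socket, then create the
    # TCP transport with the options shown in the trace (-t tcp -o -u 8192).
    sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        rpc.py -s "$sock" spdk_get_version &> /dev/null && break
        sleep 0.1
    done
    (( i < 100 )) || { echo "no listener on $sock" >&2; exit 1; }
    rpc.py -s "$sock" nvmf_create_transport -t tcp -o -u 8192
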
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.365 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 Malloc1 00:21:30.365 [2024-07-15 21:59:24.448158] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.365 Malloc2 00:21:30.365 Malloc3 00:21:30.365 Malloc4 00:21:30.365 Malloc5 00:21:30.623 Malloc6 00:21:30.623 Malloc7 00:21:30.623 Malloc8 00:21:30.623 Malloc9 00:21:30.623 Malloc10 00:21:30.623 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
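
create_subsystems batches its work: each for/cat pair above appends one subsystem's RPC lines to rpcs.txt, and the single bare rpc_cmd at shutdown.sh@35 then plays the whole file into the target, which is why Malloc1 through Malloc10 appear in one burst. The per-subsystem block is hidden inside the heredoc, so the block below is an illustration assembled from standard SPDK RPC names and invented bdev sizes, not a copy of shutdown.sh:

    #!/usr/bin/env bash
    # Build one RPC batch for subsystems 1..10, then submit it line by
    # line. (The framework's rpc_cmd streams the file through a single
    # persistent rpc.py --server process instead of forking per line.)
    rpcs=rpcs.txt
    rm -f "$rpcs"
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem -a -s SPDK$i nqn.2016-06.io.spdk:cnode$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420 nqn.2016-06.io.spdk:cnode$i"
        } >> "$rpcs"
    done
    while read -r line; do
        rpc.py -s /var/tmp/spdk.sock $line   # unquoted on purpose: split into argv
    done < "$rpcs"
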
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.623 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:30.623 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.623 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3758444 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3758444 /var/tmp/bdevperf.sock 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3758444 ']' 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.883 { 00:21:30.883 "params": { 00:21:30.883 "name": "Nvme$subsystem", 00:21:30.883 "trtype": "$TEST_TRANSPORT", 00:21:30.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.883 "adrfam": "ipv4", 00:21:30.883 "trsvcid": "$NVMF_PORT", 00:21:30.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.883 "hdgst": ${hdgst:-false}, 00:21:30.883 "ddgst": ${ddgst:-false} 00:21:30.883 }, 00:21:30.883 "method": "bdev_nvme_attach_controller" 00:21:30.883 } 00:21:30.883 EOF 00:21:30.883 )") 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.883 { 00:21:30.883 "params": { 00:21:30.883 "name": "Nvme$subsystem", 00:21:30.883 "trtype": "$TEST_TRANSPORT", 00:21:30.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.883 "adrfam": "ipv4", 00:21:30.883 "trsvcid": "$NVMF_PORT", 00:21:30.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:30.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.883 "hdgst": ${hdgst:-false}, 00:21:30.883 "ddgst": ${ddgst:-false} 00:21:30.883 }, 00:21:30.883 "method": "bdev_nvme_attach_controller" 00:21:30.883 } 00:21:30.883 EOF 00:21:30.883 )") 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.883 { 00:21:30.883 "params": { 00:21:30.883 "name": "Nvme$subsystem", 00:21:30.883 "trtype": "$TEST_TRANSPORT", 00:21:30.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.883 "adrfam": "ipv4", 00:21:30.883 "trsvcid": "$NVMF_PORT", 00:21:30.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.883 "hdgst": ${hdgst:-false}, 00:21:30.883 "ddgst": ${ddgst:-false} 00:21:30.883 }, 00:21:30.883 "method": "bdev_nvme_attach_controller" 00:21:30.883 } 00:21:30.883 EOF 00:21:30.883 )") 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.883 { 00:21:30.883 "params": { 00:21:30.883 "name": "Nvme$subsystem", 00:21:30.883 "trtype": "$TEST_TRANSPORT", 00:21:30.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.883 "adrfam": "ipv4", 00:21:30.883 "trsvcid": "$NVMF_PORT", 00:21:30.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.883 "hdgst": ${hdgst:-false}, 00:21:30.883 "ddgst": ${ddgst:-false} 00:21:30.883 }, 00:21:30.883 "method": "bdev_nvme_attach_controller" 00:21:30.883 } 00:21:30.883 EOF 00:21:30.883 )") 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.883 { 00:21:30.883 "params": { 00:21:30.883 "name": "Nvme$subsystem", 00:21:30.883 "trtype": "$TEST_TRANSPORT", 00:21:30.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.883 "adrfam": "ipv4", 00:21:30.883 "trsvcid": "$NVMF_PORT", 00:21:30.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.883 "hdgst": ${hdgst:-false}, 00:21:30.883 "ddgst": ${ddgst:-false} 00:21:30.883 }, 00:21:30.883 "method": "bdev_nvme_attach_controller" 00:21:30.883 } 00:21:30.883 EOF 00:21:30.883 )") 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.883 { 00:21:30.883 "params": { 00:21:30.883 "name": "Nvme$subsystem", 00:21:30.883 "trtype": "$TEST_TRANSPORT", 00:21:30.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.883 "adrfam": "ipv4", 00:21:30.883 "trsvcid": "$NVMF_PORT", 00:21:30.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.883 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:30.883 "hdgst": ${hdgst:-false}, 00:21:30.883 "ddgst": ${ddgst:-false} 00:21:30.883 }, 00:21:30.883 "method": "bdev_nvme_attach_controller" 00:21:30.883 } 00:21:30.883 EOF 00:21:30.883 )") 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.883 { 00:21:30.883 "params": { 00:21:30.883 "name": "Nvme$subsystem", 00:21:30.883 "trtype": "$TEST_TRANSPORT", 00:21:30.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.883 "adrfam": "ipv4", 00:21:30.883 "trsvcid": "$NVMF_PORT", 00:21:30.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.883 "hdgst": ${hdgst:-false}, 00:21:30.883 "ddgst": ${ddgst:-false} 00:21:30.883 }, 00:21:30.883 "method": "bdev_nvme_attach_controller" 00:21:30.883 } 00:21:30.883 EOF 00:21:30.883 )") 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.883 [2024-07-15 21:59:24.917575] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:21:30.883 [2024-07-15 21:59:24.917627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758444 ] 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.883 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.883 { 00:21:30.883 "params": { 00:21:30.883 "name": "Nvme$subsystem", 00:21:30.883 "trtype": "$TEST_TRANSPORT", 00:21:30.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.883 "adrfam": "ipv4", 00:21:30.883 "trsvcid": "$NVMF_PORT", 00:21:30.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.883 "hdgst": ${hdgst:-false}, 00:21:30.883 "ddgst": ${ddgst:-false} 00:21:30.883 }, 00:21:30.883 "method": "bdev_nvme_attach_controller" 00:21:30.883 } 00:21:30.883 EOF 00:21:30.883 )") 00:21:30.884 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.884 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.884 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.884 { 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme$subsystem", 00:21:30.884 "trtype": "$TEST_TRANSPORT", 00:21:30.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "$NVMF_PORT", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.884 "hdgst": ${hdgst:-false}, 00:21:30.884 "ddgst": ${ddgst:-false} 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 } 00:21:30.884 EOF 00:21:30.884 )") 00:21:30.884 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.884 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.884 21:59:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.884 { 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme$subsystem", 00:21:30.884 "trtype": "$TEST_TRANSPORT", 00:21:30.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "$NVMF_PORT", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.884 "hdgst": ${hdgst:-false}, 00:21:30.884 "ddgst": ${ddgst:-false} 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 } 00:21:30.884 EOF 00:21:30.884 )") 00:21:30.884 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:30.884 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:30.884 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:30.884 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.884 21:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme1", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 },{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme2", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 },{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme3", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 },{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme4", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 },{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme5", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 },{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme6", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 },{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme7", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 },{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme8", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 },{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme9", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 },{ 00:21:30.884 "params": { 00:21:30.884 "name": "Nvme10", 00:21:30.884 "trtype": "tcp", 00:21:30.884 "traddr": "10.0.0.2", 00:21:30.884 "adrfam": "ipv4", 00:21:30.884 "trsvcid": "4420", 00:21:30.884 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:30.884 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:30.884 "hdgst": false, 00:21:30.884 "ddgst": false 00:21:30.884 }, 00:21:30.884 "method": "bdev_nvme_attach_controller" 00:21:30.884 }' 00:21:30.884 [2024-07-15 21:59:24.975470] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.884 [2024-07-15 21:59:25.048962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.786 Running I/O for 10 seconds... 
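
What the wall of heredocs above built: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, jq joins and validates them, and printf hands the result to bdevperf through process substitution, which is the --json /dev/fd/63 visible in bdevperf's argv. Reduced to a single controller, the launch idiom looks like this (the outer subsystems/bdev scaffold is the standard SPDK JSON-config layout; it is not shown verbatim in the xtrace):

    #!/usr/bin/env bash
    # Hand bdevperf a generated bdev config via process substitution,
    # mirroring the traced run: -q 64 -o 65536 -w verify -t 10.
    config='{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }'
    # Path as in the trace, relative to the SPDK build tree.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(printf '%s\n' "$config") -q 64 -o 65536 -w verify -t 10

Driving ten controllers just means ten entries in the config array, one per cnode, exactly as the printf output above enumerates.
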
00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:32.786 21:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=131 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3758167 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3758167 ']' 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3758167 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3758167 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:33.058 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3758167' 00:21:33.058 killing process with pid 3758167 00:21:33.059 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3758167 00:21:33.059 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3758167 00:21:33.059 [2024-07-15 21:59:27.144290] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144356] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144364] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144371] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144377] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144399] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144405] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144411] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5e90 is same with the state(5) to be set 00:21:33.059 [2024-07-15 21:59:27.144422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x18e5e90 is same with the state(5) to be set
[The same tcp.c:1621:nvmf_tcp_qpair_set_recv_state *ERROR* record then repeats timestamp after timestamp, first for tqpair=0x18e5e90 and then for tqpair=0x18e8af0, as the freshly killed target tears down its qpairs; the dozens of identical repeats are omitted here.]
21:59:27.145958] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.145963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.145969] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.145977] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.145984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.145990] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.145996] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146001] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146007] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146013] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146044] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146051] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146087] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same 
with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146100] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.146125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8af0 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147867] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147878] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147886] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147909] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147921] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147939] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147965] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147971] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147989] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.147996] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148002] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148008] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148013] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148037] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148043] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148049] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148081] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148088] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148100] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the 
state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148106] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148112] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.060 [2024-07-15 21:59:27.148138] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148144] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148150] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148174] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148188] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148194] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148202] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148208] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6370 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.061 [2024-07-15 21:59:27.148279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.061 [2024-07-15 21:59:27.148292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.061 [2024-07-15 21:59:27.148299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.061 [2024-07-15 21:59:27.148307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.061 [2024-07-15 21:59:27.148314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.061 [2024-07-15 21:59:27.148321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.061 [2024-07-15 21:59:27.148328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.061 [2024-07-15 21:59:27.148334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9e070 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.148364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.061 [2024-07-15 21:59:27.148372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.061 [2024-07-15 21:59:27.148379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.061 [2024-07-15 21:59:27.148386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.061 [2024-07-15 21:59:27.148394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.061 [2024-07-15 21:59:27.148400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.061 [2024-07-15 21:59:27.148407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.061 [2024-07-15 21:59:27.148414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.061 [2024-07-15 21:59:27.148421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb37d60 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.149506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6870 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150105] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150112] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 
21:59:27.150119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150147] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150153] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150166] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150190] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150220] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150233] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150252] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150258] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same 
with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150270] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150283] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150295] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150308] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150314] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150321] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150327] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150339] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150351] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150363] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150369] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150375] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150387] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150395] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150419] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150436] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.061 [2024-07-15 21:59:27.150442] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.150448] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.150454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.150460] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.150466] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.150472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.150477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6d50 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151311] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151318] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151343] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the 
state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151355] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151379] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151385] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151403] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151427] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151439] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151456] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151462] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151478] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151491] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151497] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151503] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151510] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151516] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151522] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151535] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151541] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151576] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151590] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151596] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151602] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 
21:59:27.151614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151620] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151625] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151631] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.151689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7230 is same with the state(5) to be set 00:21:33.062 [2024-07-15 21:59:27.153419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.062 [2024-07-15 21:59:27.153445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.062 [2024-07-15 21:59:27.153461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.062 [2024-07-15 21:59:27.153469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.062 [2024-07-15 21:59:27.153478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.062 [2024-07-15 21:59:27.153484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.062 [2024-07-15 21:59:27.153493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.062 [2024-07-15 21:59:27.153500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.062 [2024-07-15 
21:59:27.153508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.062 [2024-07-15 21:59:27.153515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.062 [2024-07-15 21:59:27.153523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.062 [2024-07-15 21:59:27.153531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.062 [2024-07-15 21:59:27.153539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.062 [2024-07-15 21:59:27.153545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.062 [2024-07-15 21:59:27.153553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 
21:59:27.153668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 21:59:27.153802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.063 [2024-07-15 21:59:27.153810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.063 [2024-07-15 
00:21:33.063 [2024-07-15 21:59:27.153819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.063 [2024-07-15 21:59:27.153826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 21:59:27.153834-21:59:27.154427: the same command/completion pair repeats for WRITE cid:63 (lba:24448) and READ cid:0-37 (lba:16384-21120, len:128, lba stepping by 128), every command ABORTED - SQ DELETION (00/08) qid:1 ...]
00:21:33.064 [2024-07-15 21:59:27.154490] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbe81b0 was disconnected and freed. reset controller.
00:21:33.064 [2024-07-15 21:59:27.156733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7730 is same with the state(5) to be set
[... 21:59:27.156754-21:59:27.156761: the previous line repeats twice more for tqpair=0x18e7730 ...]
00:21:33.064 [2024-07-15 21:59:27.157126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.064 [2024-07-15 21:59:27.157150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 21:59:27.157162-21:59:27.158123: the pair repeats for WRITE cid:14-63 (lba:18176-24448) and READ cid:0-12 (lba:16384-17920); interleaved with it, two tcp.c:1621:nvmf_tcp_qpair_set_recv_state *ERROR* lines (21:59:27.157685, 21:59:27.157706) report the recv state of tqpair=0x18e7c10 is same with the state(5) to be set ...]
00:21:33.066 [2024-07-15 21:59:27.158148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:33.066 [2024-07-15 21:59:27.158198] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12bde00 was disconnected and freed. reset controller.
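The pattern above is the signature of a qpair teardown during a controller reset: bdev_nvme frees the disconnected qpair and every command still queued on it is completed as ABORTED - SQ DELETION (00/08). For post-processing a log like this one, a small hypothetical helper (not part of this test run; the regular expression is inferred from the "nvme_io_qpair_print_command" records themselves) could condense each abort storm into one summary line per operation type:

#!/usr/bin/env python3
"""Hypothetical log condenser -- not part of the SPDK test run.
Collapses nvme_qpair abort storms like the ones above into one
summary line per (opcode, sqid)."""
import re
import sys
from collections import defaultdict

# Matches e.g. "... *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:\d+")

def summarize(stream):
    buckets = defaultdict(lambda: {"cids": [], "lbas": []})
    for line in stream:
        m = CMD_RE.search(line)
        if m:
            op, sqid = m.group(1), m.group(2)
            buckets[(op, sqid)]["cids"].append(int(m.group(3)))
            buckets[(op, sqid)]["lbas"].append(int(m.group(4)))
    for (op, sqid), b in sorted(buckets.items()):
        print(f"{op} sqid:{sqid}: {len(b['cids'])} aborted commands, "
              f"cid {min(b['cids'])}-{max(b['cids'])}, "
              f"lba {min(b['lbas'])}-{max(b['lbas'])}")

if __name__ == "__main__":
    summarize(sys.stdin)

Fed the console log on stdin, it would print, for the run just above, something like "WRITE sqid:1: 51 aborted commands, cid 13-63, lba 18048-24448".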
00:21:33.066 [2024-07-15 21:59:27.158403] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8110 is same with the state(5) to be set
[... 21:59:27.158423-21:59:27.158823: the previous line repeats roughly sixty times for tqpair=0x18e8110; interleaved with it (21:59:27.158497-21:59:27.158690), the abort pattern continues for WRITE cid:22-31 (lba:19200-20352, len:128), each command again ABORTED - SQ DELETION (00/08) qid:1 ...]
00:21:33.067 [2024-07-15 21:59:27.159376] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e85f0 is same with the state(5) to be set
[... 21:59:27.159392-21:59:27.159783: the previous line repeats roughly sixty more times for tqpair=0x18e85f0 ...]
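The tcp.c:1621 recv-state lines repeat with identical text and only the timestamps changing, so they respond well to a uniq-style pass. A second hypothetical sketch under the same assumptions (message text taken verbatim from the lines above; both the console stamp and the bracketed record stamp are stripped before comparing, and it assumes one record per line, i.e. an unwrapped log):

#!/usr/bin/env python3
"""Hypothetical deduplicator -- not part of the SPDK test run.
Collapses consecutive log lines that differ only in timestamps
into "<count>x <message>" form, uniq -c style."""
import re
import sys

# Strips the "00:21:33.066 " console stamp and the
# "[2024-07-15 21:59:27.158403] " record stamp seen above.
STAMPS_RE = re.compile(
    r"^\d{2}:\d{2}:\d{2}\.\d{3} |\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\] ")

def collapse(stream):
    prev, count = None, 0
    for line in stream:
        msg = STAMPS_RE.sub("", line.rstrip("\n"))
        if msg == prev:
            count += 1
            continue
        if prev is not None:
            print(f"{count}x {prev}")
        prev, count = msg, 1
    if prev is not None:
        print(f"{count}x {prev}")

if __name__ == "__main__":
    collapse(sys.stdin)

On the block above it would reduce the roughly sixty tqpair=0x18e85f0 lines to a single counted entry.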
00:21:33.067 [2024-07-15 21:59:27.168154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.067 [2024-07-15 21:59:27.168168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 21:59:27.168181-21:59:27.168858: the pair repeats for WRITE cid:33-63 (lba:20608-24448) and READ cid:0 (lba:16384), every command ABORTED - SQ DELETION (00/08) qid:1 ...]
00:21:33.068 [2024-07-15 21:59:27.168870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.168879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.168890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.168899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.168911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.168921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.168933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.168942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.168954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.168963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.168978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.168988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.168999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.169008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.169020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.169030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.169041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.169051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.169061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.169071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.068 [2024-07-15 21:59:27.169083] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.068 [2024-07-15 21:59:27.169093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-07-15 21:59:27.169307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:33.069 [2024-07-15 21:59:27.169393] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1465810 was disconnected and freed. reset controller. 00:21:33.069 [2024-07-15 21:59:27.169593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:33.069 [2024-07-15 21:59:27.169657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb63ba0 (9): Bad file descriptor 00:21:33.069 [2024-07-15 21:59:27.169688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc95bf0 is same with the state(5) to be set 00:21:33.069 [2024-07-15 21:59:27.169803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169866] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf6a80 is same with the state(5) to be set 00:21:33.069 [2024-07-15 21:59:27.169915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.169987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.169998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66740 is same with the state(5) to be set 00:21:33.069 [2024-07-15 21:59:27.170028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd03ac0 is same with the 
state(5) to be set 00:21:33.069 [2024-07-15 21:59:27.170127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9e070 (9): Bad file descriptor 00:21:33.069 [2024-07-15 21:59:27.170149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb37d60 (9): Bad file descriptor 00:21:33.069 [2024-07-15 21:59:27.170178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6864e0 is same with the state(5) to be set 00:21:33.069 [2024-07-15 21:59:27.170289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7ed0 is same with the state(5) to be set 00:21:33.069 [2024-07-15 21:59:27.170397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.069 [2024-07-15 21:59:27.170427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.069 [2024-07-15 21:59:27.170437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.070 [2024-07-15 21:59:27.170450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.170460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.070 [2024-07-15 21:59:27.170469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.170478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd004e0 is same with the state(5) to be set 00:21:33.070 [2024-07-15 21:59:27.173320] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:33.070 [2024-07-15 21:59:27.173357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:33.070 [2024-07-15 21:59:27.173372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:33.070 [2024-07-15 21:59:27.173388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7ed0 (9): Bad file descriptor 00:21:33.070 [2024-07-15 21:59:27.173400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6864e0 (9): Bad file descriptor 00:21:33.070 [2024-07-15 21:59:27.173475] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:33.070 [2024-07-15 21:59:27.174264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.070 [2024-07-15 21:59:27.174289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb63ba0 with addr=10.0.0.2, port=4420 00:21:33.070 [2024-07-15 21:59:27.174300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63ba0 is same with the state(5) to be set 00:21:33.070 [2024-07-15 21:59:27.174362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.174987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.174999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.175008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.175020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.175030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.175042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.175052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.175065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:33.070 [2024-07-15 21:59:27.175074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.175086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.175094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.070 [2024-07-15 21:59:27.175107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-07-15 21:59:27.175116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 
21:59:27.175290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe64e0 is same with the state(5) to be set 00:21:33.071 [2024-07-15 21:59:27.175816] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbe64e0 was disconnected and freed. reset controller. 00:21:33.071 [2024-07-15 21:59:27.175840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.071 [2024-07-15 21:59:27.175924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-07-15 21:59:27.175936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.072 [2024-07-15 21:59:27.175947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.072 [2024-07-15 21:59:27.175956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.072 [2024-07-15 21:59:27.175967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.072 [2024-07-15 21:59:27.175976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.072 [2024-07-15 21:59:27.175988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.072 [2024-07-15 21:59:27.175996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.072 [2024-07-15 21:59:27.176007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.072 [2024-07-15 21:59:27.176016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 42 near-identical READ command/completion pairs elided: cid:22-63, lba:11008-16256 in steps of 128, each completed ABORTED - SQ DELETION (00/08) ...]
00:21:33.073 [2024-07-15 21:59:27.176878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.073 [2024-07-15 21:59:27.176887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 12 near-identical WRITE command/completion pairs elided: cid:1-12, lba:16512-17920 in steps of 128, each completed ABORTED - SQ DELETION (00/08) ...]
00:21:33.073 [2024-07-15 21:59:27.177144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe6ce0 is same with the state(5) to be set
00:21:33.073 [2024-07-15 21:59:27.177216] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbe6ce0 was disconnected and freed. reset controller.
00:21:33.073 [2024-07-15 21:59:27.177296] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:33.073 [2024-07-15 21:59:27.177348] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:33.073 [2024-07-15 21:59:27.178255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.073 [2024-07-15 21:59:27.178277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6864e0 with addr=10.0.0.2, port=4420
00:21:33.073 [2024-07-15 21:59:27.178287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6864e0 is same with the state(5) to be set
00:21:33.073 [2024-07-15 21:59:27.178533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.073 [2024-07-15 21:59:27.178549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7ed0 with addr=10.0.0.2, port=4420
00:21:33.073 [2024-07-15 21:59:27.178558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7ed0 is same with the state(5) to be set
00:21:33.073 [2024-07-15 21:59:27.178571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb63ba0 (9): Bad file descriptor
00:21:33.073 [2024-07-15 21:59:27.181050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.073 [2024-07-15 21:59:27.181071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 near-identical READ command/completion pairs elided: cid:1-63, lba:16512-24448 in steps of 128, each completed ABORTED - SQ DELETION (00/08) ...]
00:21:33.075 [2024-07-15 21:59:27.182448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160d2f0 is same with the state(5) to be set
00:21:33.075 [2024-07-15 21:59:27.182506] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x160d2f0 was disconnected and freed. reset controller.
00:21:33.075 [2024-07-15 21:59:27.182561] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:33.075 [2024-07-15 21:59:27.182583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:33.075 [2024-07-15 21:59:27.182597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:33.075 [2024-07-15 21:59:27.182624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd03ac0 (9): Bad file descriptor
00:21:33.075 [2024-07-15 21:59:27.182646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6864e0 (9): Bad file descriptor
00:21:33.075 [2024-07-15 21:59:27.182658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7ed0 (9): Bad file descriptor
00:21:33.075 [2024-07-15 21:59:27.182670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:33.075 [2024-07-15 21:59:27.182678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:33.075 [2024-07-15 21:59:27.182688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:33.075 [2024-07-15 21:59:27.182717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc95bf0 (9): Bad file descriptor
00:21:33.075 [2024-07-15 21:59:27.182739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf6a80 (9): Bad file descriptor
00:21:33.075 [2024-07-15 21:59:27.182759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66740 (9): Bad file descriptor
00:21:33.075 [2024-07-15 21:59:27.182787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd004e0 (9): Bad file descriptor
00:21:33.075 [2024-07-15 21:59:27.184074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:33.075 [2024-07-15 21:59:27.184097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:33.075 [2024-07-15 21:59:27.184389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.075 [2024-07-15 21:59:27.184423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb37d60 with addr=10.0.0.2, port=4420
00:21:33.075 [2024-07-15 21:59:27.184433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb37d60 is same with the state(5) to be set
00:21:33.075 [2024-07-15 21:59:27.184450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:33.075 [2024-07-15 21:59:27.184458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:33.075 [2024-07-15 21:59:27.184469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:33.075 [2024-07-15 21:59:27.184489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:33.075 [2024-07-15 21:59:27.184497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:33.075 [2024-07-15 21:59:27.184505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:33.075 [2024-07-15 21:59:27.185144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.075 [2024-07-15 21:59:27.185719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.075 [2024-07-15 21:59:27.185729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.076 [2024-07-15 21:59:27.185740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.076 [2024-07-15 21:59:27.185750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:21:33.076 [2024-07-15 21:59:27.185762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.185989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.185999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.076 [2024-07-15 21:59:27.186448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.076 [2024-07-15 21:59:27.186455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdfa50 is same with the state(5) to be set
00:21:33.077 [2024-07-15 21:59:27.187759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [2024-07-15 21:59:27.187776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
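
The run of paired NOTICE lines above is qpair teardown during a controller reset: nvme_io_qpair_print_command echoes each still-outstanding READ (the LBAs advance in len:128 strides, i.e. a sequential workload was in flight), and spdk_nvme_print_completion retires each one as ABORTED - SQ DELETION before bdev_nvme reports the reset attempt itself as failed. The "(00/08)" pair is the raw NVMe status: status code type 0x0 (generic command status) and, within that set, status code 0x08 (command aborted due to SQ deletion). A minimal decoding sketch for triaging completions like these, assuming nothing beyond the "(sct/sc)" format printed above:

import re

# Decode the "(sct/sc)" status pair that spdk_nvme_print_completion prints,
# e.g. the "(00/08)" above: status code type (SCT) 0x0 is the NVMe generic
# command status set, and within it status code (SC) 0x08 is "Command Aborted
# due to SQ Deletion" per the NVMe base specification.
STATUS = re.compile(r'\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)')
GENERIC_SC = {0x00: "SUCCESSFUL COMPLETION", 0x08: "ABORTED - SQ DELETION"}

def decode_status(line):
    m = STATUS.search(line)
    if m is None:
        return None                                  # not a completion line
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    label = GENERIC_SC.get(sc, "other") if sct == 0x0 else "non-generic SCT"
    return sct, sc, label

sample = "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"
print(decode_status(sample))                         # -> (0, 8, 'ABORTED - SQ DELETION')
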
00:21:33.077 [2024-07-15 21:59:27.187788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:33.077 [2024-07-15 21:59:27.188108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.077 [2024-07-15 21:59:27.188123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd03ac0 with addr=10.0.0.2, port=4420
00:21:33.077 [2024-07-15 21:59:27.188131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd03ac0 is same with the state(5) to be set
00:21:33.077 [2024-07-15 21:59:27.188396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.077 [2024-07-15 21:59:27.188408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd004e0 with addr=10.0.0.2, port=4420
00:21:33.077 [2024-07-15 21:59:27.188416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd004e0 is same with the state(5) to be set
00:21:33.077 [2024-07-15 21:59:27.188426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb37d60 (9): Bad file descriptor
00:21:33.077 [2024-07-15 21:59:27.188718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:33.077 [2024-07-15 21:59:27.188933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.077 [2024-07-15 21:59:27.188947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc9e070 with addr=10.0.0.2, port=4420
00:21:33.077 [2024-07-15 21:59:27.188954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9e070 is same with the state(5) to be set
00:21:33.077 [2024-07-15 21:59:27.188964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd03ac0 (9): Bad file descriptor
00:21:33.077 [2024-07-15 21:59:27.188974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd004e0 (9): Bad file descriptor
00:21:33.077 [2024-07-15 21:59:27.188981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:33.077 [2024-07-15 21:59:27.188988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:33.077 [2024-07-15 21:59:27.188995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:33.077 [2024-07-15 21:59:27.189261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:33.077 [2024-07-15 21:59:27.189274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:33.077 [2024-07-15 21:59:27.189283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:33.077 [2024-07-15 21:59:27.189430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.077 [2024-07-15 21:59:27.189446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb63ba0 with addr=10.0.0.2, port=4420
00:21:33.077 [2024-07-15 21:59:27.189455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63ba0 is same with the state(5) to be set
00:21:33.077 [2024-07-15 21:59:27.189464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9e070 (9): Bad file descriptor
00:21:33.077 [2024-07-15 21:59:27.189472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:33.077 [2024-07-15 21:59:27.189478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:33.077 [2024-07-15 21:59:27.189484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:33.077 [2024-07-15 21:59:27.189494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:21:33.077 [2024-07-15 21:59:27.189501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:21:33.077 [2024-07-15 21:59:27.189508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:21:33.077 [2024-07-15 21:59:27.189545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:33.077 [2024-07-15 21:59:27.189553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:33.077 [2024-07-15 21:59:27.189759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.077 [2024-07-15 21:59:27.189770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7ed0 with addr=10.0.0.2, port=4420
00:21:33.077 [2024-07-15 21:59:27.189778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7ed0 is same with the state(5) to be set
00:21:33.077 [2024-07-15 21:59:27.190013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.077 [2024-07-15 21:59:27.190025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6864e0 with addr=10.0.0.2, port=4420
00:21:33.077 [2024-07-15 21:59:27.190033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6864e0 is same with the state(5) to be set
00:21:33.077 [2024-07-15 21:59:27.190041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb63ba0 (9): Bad file descriptor
00:21:33.077 [2024-07-15 21:59:27.190050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:33.077 [2024-07-15 21:59:27.190056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:33.077 [2024-07-15 21:59:27.190062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:33.077 [2024-07-15 21:59:27.190093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:33.077 [2024-07-15 21:59:27.190102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7ed0 (9): Bad file descriptor
00:21:33.077 [2024-07-15 21:59:27.190111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6864e0 (9): Bad file descriptor
00:21:33.077 [2024-07-15 21:59:27.190119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:33.077 [2024-07-15 21:59:27.190126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:33.077 [2024-07-15 21:59:27.190132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:33.077 [2024-07-15 21:59:27.190156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:33.077 [2024-07-15 21:59:27.190164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:21:33.077 [2024-07-15 21:59:27.190170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:21:33.077 [2024-07-15 21:59:27.190180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:21:33.077 [2024-07-15 21:59:27.190190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:33.077 [2024-07-15 21:59:27.190197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:33.077 [2024-07-15 21:59:27.190204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:33.077 [2024-07-15 21:59:27.190233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:33.077 [2024-07-15 21:59:27.190242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
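
The burst above is the reconnect side of the same reset: connect() fails with errno = 111, which is ECONNREFUSED on Linux, so nothing is accepting on 10.0.0.2:4420 at that instant; the half-closed qpairs then fail to flush with (9) EBADF, and each controller walks "resetting controller" -> "Ctrlr is in error state" -> "controller reinitialization failed" -> "in failed state." before bdev_nvme reports "Resetting controller failed." With several subsystems interleaved, a per-NQN tally is easier to scan than the raw stream; a hypothetical triage helper (not part of SPDK or this test suite), keyed only on the message text shown above:

import re
import sys
from collections import Counter

# Hypothetical triage helper: collapse the reset/reconnect churn into one
# summary line per subsystem NQN. Reads a console log such as this on stdin.
NQN = re.compile(r'\[(nqn\.[^\]]+)\]')
EVENTS = {                                   # substrings exactly as logged above
    "resetting controller": "reset_started",
    "Ctrlr is in error state": "error_state",
    "controller reinitialization failed": "reinit_failed",
    "in failed state.": "failed_state",
}

tally = {}
for line in sys.stdin:
    m = NQN.search(line)
    if m is None:
        continue                             # ignore lines without an NQN tag
    counts = tally.setdefault(m.group(1), Counter())
    for needle, label in EVENTS.items():
        if needle in line:
            counts[label] += 1

for nqn in sorted(tally):
    print(nqn, dict(tally[nqn]))

Run as, say, python3 triage.py < console.log (file names illustrative); on this section it would show each cnode seen here (1, 2, 3, 6, 7, 8, 10) accumulating reset attempts that end in a failed state rather than a successful reconnect.
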
00:21:33.077 [2024-07-15 21:59:27.192684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 
21:59:27.192853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.077 [2024-07-15 21:59:27.192952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.077 [2024-07-15 21:59:27.192961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.192969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.192980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.192987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.192995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.078 [2024-07-15 21:59:27.193629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.078 [2024-07-15 21:59:27.193636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.193644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.193653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.193661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.193668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.193676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.193684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.193692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.193699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.193707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb31c30 is same with the state(5) to be set 00:21:33.079 [2024-07-15 21:59:27.194717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194816] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.194989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.194997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.079 [2024-07-15 21:59:27.195286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.079 [2024-07-15 21:59:27.195295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:33.080 [2024-07-15 21:59:27.195310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 
[2024-07-15 21:59:27.195467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 21:59:27.195607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.080 [2024-07-15 21:59:27.195614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.080 [2024-07-15 
21:59:27.195622 .. 21:59:27.195749] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:56-63 nsid:1 lba:15360-16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [eight command/completion pairs condensed]
00:21:33.080 [2024-07-15 21:59:27.195749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33140 is same with the state(5) to be set
00:21:33.080 [2024-07-15 21:59:27.196753 .. 21:59:27.197798] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:8192-16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [sixty-four command/completion pairs condensed]
00:21:33.082 [2024-07-15 21:59:27.197805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbde550 is same with the state(5) to be set
00:21:33.082 [2024-07-15 21:59:27.199760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:33.082 [2024-07-15 21:59:27.199784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:33.082 task offset: 21248 on job bdev=Nvme3n1 fails
00:21:33.082
00:21:33.082 Latency(us)
00:21:33.082 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average       min        max
00:21:33.082 (all ten jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended in about the listed runtime with error)
00:21:33.082 Nvme1n1  :       0.60    212.54  13.28  106.27  0.00  197761.41  16526.47  203332.56
00:21:33.082 Nvme2n1  :       0.60    127.61   7.98  106.06  0.00  262709.43  17438.27  235245.75
00:21:33.082 Nvme3n1  :       0.58    220.87  13.80  110.43  0.00  179666.07   7123.48  206979.78
00:21:33.082 Nvme4n1  :       0.62    213.85  13.37  103.68  0.00  183090.26  10143.83  215186.03
00:21:33.082 Nvme5n1  :       0.62    103.34   6.46  103.34  0.00  273658.66  24846.69  246187.41
00:21:33.082 Nvme6n1  :       0.59    215.35  13.46  107.67  0.00  168775.09  13905.03  199685.34
00:21:33.082 Nvme7n1  :       0.60    214.91  13.43  107.45  0.00  163969.86  13734.07  215186.03
00:21:33.082 Nvme8n1  :       0.61    211.05  13.19  105.52  0.00  162332.49  20629.59  186008.26
00:21:33.082 Nvme9n1  :       0.62    103.00   6.44  103.00  0.00  243149.69  18692.01  229774.91
00:21:33.082 Nvme10n1 :       0.61    104.91   6.56  104.91  0.00  229612.86  25986.45  228863.11
00:21:33.082 ===================================================================================================================
00:21:33.082 Total    :              1727.43 107.96 1058.36  0.00  199875.42   7123.48  246187.41
00:21:33.082 [2024-07-15 21:59:27.227014] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:33.082 [2024-07-15 21:59:27.227058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:33.082 [2024-07-15 21:59:27.227591 .. 21:59:27.228095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66740, 0xbf6a80, 0xc95bf0 with addr=10.0.0.2, port=4420; the recv state of each tqpair is same with the state(5) to be set [three connect/recv-state triplets condensed]
00:21:33.082 [2024-07-15 21:59:27.228122 .. 21:59:27.228192] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. [six notices condensed]
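For reference, errno 111 in these connect() failures is ECONNREFUSED: the target process was killed, so nothing listens on 10.0.0.2:4420 any more and the kernel actively refuses every reconnect attempt. A minimal bash sketch of the same liveness check (probe_port is a hypothetical helper, not part of the test suite; it relies on bash's built-in /dev/tcp redirection):

probe_port() {
    # Open a TCP connection to host:port inside a subshell; the redirection
    # fails (ECONNREFUSED) when no listener exists, and the fd closes with
    # the subshell either way.
    local host=$1 port=$2
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
        echo "listener up on ${host}:${port}"
    else
        echo "connect refused/failed for ${host}:${port} (target likely down)"
        return 1
    fi
}
probe_port 10.0.0.2 4420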
00:21:33.082 [2024-07-15 21:59:27.228204] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:33.082 [2024-07-15 21:59:27.229025 .. 21:59:27.229092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: resetting controller: cnode1, cnode8, cnode2, cnode10, cnode3, cnode6, cnode7 [seven notices condensed]
00:21:33.082 [2024-07-15 21:59:27.229163 .. 21:59:27.229190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66740, 0xbf6a80, 0xc95bf0 (9): Bad file descriptor [three errors condensed]
00:21:33.082 [2024-07-15 21:59:27.229532 .. 21:59:27.230983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb37d60, 0xd004e0, 0xd03ac0, 0xc9e070, 0xb63ba0, 0x6864e0, 0xbf7ed0 with addr=10.0.0.2, port=4420; the recv state of each tqpair is same with the state(5) to be set [seven connect/recv-state triplets condensed]
00:21:33.083 [2024-07-15 21:59:27.230992 .. 21:59:27.231068] nvme_ctrlr.c: *ERROR*: [cnode4, cnode5, cnode9] Ctrlr is in error state; controller reinitialization failed; in failed state [three per-controller error triplets condensed]
00:21:33.083 [2024-07-15 21:59:27.231126 .. 21:59:27.231143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [three errors condensed]
00:21:33.083 [2024-07-15 21:59:27.231155 .. 21:59:27.231220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb37d60, 0xd004e0, 0xd03ac0, 0xc9e070, 0xb63ba0, 0x6864e0, 0xbf7ed0 (9): Bad file descriptor [seven errors condensed]
00:21:33.083 [2024-07-15 21:59:27.231262 .. 21:59:27.231450] nvme_ctrlr.c: *ERROR*: [cnode1, cnode8, cnode2, cnode10, cnode3, cnode6, cnode7] Ctrlr is in error state; controller reinitialization failed; in failed state [seven per-controller error triplets condensed]
00:21:33.083 [2024-07-15 21:59:27.231491 .. 21:59:27.231550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [seven errors condensed]
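The storm above is bdev_nvme walking each of the ten controllers through disconnect, reconnect attempt, and give-up. The actual reconnect policy lives in C inside bdev_nvme; purely as an illustration of that bounded retry-then-fail shape, a generic bash wrapper (retry and the nc probe are illustrative assumptions, not helpers from this suite):

# Retry "$@" up to $1 times, sleeping $2 seconds between attempts.
retry() {
    local max=$1 delay=$2 n
    shift 2
    for ((n = 1; n <= max; n++)); do
        "$@" && return 0
        sleep "$delay"
    done
    echo "giving up after ${max} attempts: $*" >&2
    return 1
}
# e.g. wait for the target port to come back (requires netcat's -z scan mode):
retry 5 1 nc -z 10.0.0.2 4420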
00:21:33.342 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:21:33.342 21:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:21:34.729 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3758444
00:21:34.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3758444) - No such process
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:34.730 rmmod nvme_tcp
00:21:34.730 rmmod nvme_fabrics
00:21:34.730 rmmod nvme_keyring
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:34.730 21:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:36.626 21:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:36.626
00:21:36.626 real    0m7.562s
00:21:36.626 user    0m18.159s
00:21:36.626 sys     0m1.137s
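The "kill: (3758444) - No such process" line above is expected: the target process had already died earlier in the test, so the teardown kill finds nothing, and the traced true right after it is the usual shell idiom for tolerating that under set -e. The pattern in isolation (nvmfpid named as in the trace):

# Teardown must not abort just because the pid is already gone.
kill -9 "$nvmfpid" 2>/dev/null || true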
21:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:36.627 21:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:36.627 ************************************
00:21:36.627 END TEST nvmf_shutdown_tc3
00:21:36.627 ************************************
00:21:36.627 21:59:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:21:36.627 21:59:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:21:36.627
00:21:36.627 real    0m31.065s
00:21:36.627 user    1m18.133s
00:21:36.627 sys     0m8.093s
00:21:36.627 21:59:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:36.627 21:59:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:36.627 ************************************
00:21:36.627 END TEST nvmf_shutdown
00:21:36.627 ************************************
00:21:36.627 21:59:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:21:36.627 21:59:30 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:21:36.627 21:59:30 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:36.627 21:59:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:36.627 21:59:30 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:21:36.627 21:59:30 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:36.627 21:59:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:36.627 21:59:30 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:21:36.627 21:59:30 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:36.627 21:59:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:36.627 21:59:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:36.627 21:59:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:36.886 ************************************
00:21:36.886 START TEST nvmf_multicontroller
00:21:36.886 ************************************
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:36.886 * Looking for test storage...
00:21:36.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s; [[ Linux == FreeBSD ]]
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9-16 -- # NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422; NVMF_IP_PREFIX=192.168.100; NVMF_IP_LEAST_ADDR=8; NVMF_TCP_IP_ADDRESS=127.0.0.1; NVMF_TRANSPORT_OPTS=; NVMF_SERIAL=SPDKISFASTANDAWESOME [one assignment per traced line condensed]
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19-22 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"); NVME_CONNECT='nvme connect'; NET_TYPE=phy; NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn [condensed]
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508-517 -- # [[ -e /bin/wpdk_common.sh ]]; [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]; source /etc/opt/spdk-pkgdep/paths/export.sh [condensed]
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2-6 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin; export PATH; echo $PATH [four near-identical PATH expansions condensed]
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48-49 -- # export NVMF_APP_SHM_ID; build_nvmf_app_args
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25-35 -- # '[' 0 -eq 1 ']'; NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF); NVMF_APP+=("${NO_HUGE[@]}"); '[' -n '' ']'; '[' 0 -eq 1 ']' [condensed]
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11-16 -- # MALLOC_BDEV_SIZE=64; MALLOC_BLOCK_SIZE=512; NVMF_HOST_FIRST_PORT=60000; NVMF_HOST_SECOND_PORT=60001; bdevperf_rpc_sock=/var/tmp/bdevperf.sock [condensed]
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:21:36.886 21:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
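The PATH above repeats the same /opt toolchain prefixes several times because paths/export.sh, as traced, prepends them unconditionally each time it is sourced. That is harmless, but for illustration, a small dedup pass would normalize it (dedup_path is a hypothetical helper, not something common.sh provides; bash 4+, and it assumes PATH entries contain no glob characters):

# Drop duplicate PATH entries while preserving first-seen order.
dedup_path() {
    local -A seen
    local out='' p IFS=':'
    for p in $PATH; do           # unquoted: split on ':' via local IFS
        [[ -n ${seen[$p]:-} ]] && continue
        seen[$p]=1
        out+="${out:+:}${p}"
    done
    printf '%s\n' "$out"
}
PATH=$(dedup_path)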
00:21:36.887 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:21:36.887 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:36.887 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs
00:21:36.887 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no
00:21:36.887 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns
00:21:36.887 21:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:36.887 21:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:36.887 21:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:36.887 21:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:21:36.887 21:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:21:36.887 21:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable
00:21:36.887 21:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291-298 -- # declare the pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays [seven empty-array declarations condensed]
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301-312 -- # e810+=(0x1592, 0x159b); x722+=(0x37d2); mlx+=(0xa2dc, 0x1021, 0xa2d6, 0x101d) [device-ID table appends condensed]
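The trap registered above (trap nvmftestfini SIGINT SIGTERM EXIT) is what guarantees the module unload and namespace cleanup seen earlier even when a test script dies mid-run. The pattern in isolation (cleanup is a stand-in for nvmftestfini):

cleanup() { echo "tearing down test resources"; }
trap cleanup SIGINT SIGTERM EXIT   # runs on normal exit and on Ctrl-C/kill alike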
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314-318 -- # mlx+=(0x1017, 0x1019, 0x1015, 0x1013) [remaining device-ID appends condensed]
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321-335 -- # [[ tcp == rdma ]]; [[ e810 == mlx5 ]]; [[ e810 == e810 ]]; pci_devs=("${e810[@]}"); (( 2 == 0 )) [transport/NIC-family checks condensed]
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340-341 -- # for pci in "${pci_devs[@]}"; echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:21:42.150 Found 0000:86:00.0 (0x8086 - 0x159b)
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342-352 -- # [[ ice == unknown ]]; [[ ice == unbound ]]; [[ 0x159b == 0x1017 ]]; [[ 0x159b == 0x1019 ]]; [[ tcp == rdma ]] [per-device driver checks condensed]
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340-341 -- # for pci in "${pci_devs[@]}"; echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:21:42.150 Found 0000:86:00.1 (0x8086 - 0x159b)
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342-352 -- # [[ ice == unknown ]]; [[ ice == unbound ]]; [[ 0x159b == 0x1017 ]]; [[ 0x159b == 0x1019 ]]; [[ tcp == rdma ]] [per-device driver checks condensed]
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366-372 -- # (( 0 > 0 )); [[ e810 == e810 ]]; [[ tcp == rdma ]] [condensed]
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382-394 -- # for pci in "${pci_devs[@]}"; pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*); [[ tcp == tcp ]]; for net_dev in "${!pci_net_devs[@]}"; [[ up == up ]]; (( 1 == 0 )) [condensed]
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399-400 -- # pci_net_devs=("${pci_net_devs[@]##*/}"); echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:21:42.150 Found net devices under 0000:86:00.0: cvl_0_0
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382-394 -- # second pass over 0000:86:00.1 [same per-device net scan condensed]
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399-400 -- # pci_net_devs=("${pci_net_devs[@]##*/}"); echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:21:42.150 Found net devices under 0000:86:00.1: cvl_0_1
00:21:42.150 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414-418 -- # is_hw=yes; [[ yes == yes ]]; [[ tcp == tcp ]]; nvmf_tcp_init [condensed]
00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229-243 -- # NVMF_INITIATOR_IP=10.0.0.1; NVMF_FIRST_TARGET_IP=10.0.0.2; TCP_INTERFACE_LIST=("${net_devs[@]}"); (( 2 > 1 )); NVMF_TARGET_INTERFACE=cvl_0_0; NVMF_INITIATOR_INTERFACE=cvl_0_1; NVMF_SECOND_TARGET_IP=; NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk; NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") [condensed]
00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.151 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:21:42.409 00:21:42.409 --- 10.0.0.2 ping statistics --- 00:21:42.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.409 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:42.409 00:21:42.409 --- 10.0.0.1 ping statistics --- 00:21:42.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.409 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3762660 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3762660 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3762660 ']' 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.409 21:59:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.409 [2024-07-15 21:59:36.516208] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:21:42.409 [2024-07-15 21:59:36.516257] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.409 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.409 [2024-07-15 21:59:36.570382] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:42.666 [2024-07-15 21:59:36.650534] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.666 [2024-07-15 21:59:36.650566] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.666 [2024-07-15 21:59:36.650573] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.666 [2024-07-15 21:59:36.650579] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.666 [2024-07-15 21:59:36.650585] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
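Condensed, the nvmf_tcp_init and nvmfappstart sequence traced above amounts to the shell steps below. This is a sketch keyed to this run's interface names (cvl_0_0, cvl_0_1) and the options visible in the log, not a verbatim excerpt of nvmf/common.sh:

    # Move one port of the dual-port E810 NIC into a private network namespace
    # so the target (10.0.0.2, inside) and the initiator (10.0.0.1, outside)
    # talk over real hardware on a single host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    modprobe nvme-tcp
    # nvmfappstart then launches the target inside the namespace; -m 0xE keeps
    # the reactors off core 0 (cores 1-3 start below), and waitforlisten polls
    # until the app answers on /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &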
00:21:42.666 [2024-07-15 21:59:36.650621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.666 [2024-07-15 21:59:36.650644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.666 [2024-07-15 21:59:36.650646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.231 [2024-07-15 21:59:37.368046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.231 Malloc0 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.231 [2024-07-15 21:59:37.431256] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.231 
21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.231 [2024-07-15 21:59:37.439191] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.231 Malloc1 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.231 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3762719 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3762719 /var/tmp/bdevperf.sock 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3762719 ']' 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.489 21:59:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.422 NVMe0n1 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.422 1 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.422 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.423 request: 00:21:44.423 { 00:21:44.423 "name": "NVMe0", 00:21:44.423 "trtype": "tcp", 00:21:44.423 "traddr": "10.0.0.2", 00:21:44.423 "adrfam": "ipv4", 00:21:44.423 "trsvcid": "4420", 00:21:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.423 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:44.423 "hostaddr": "10.0.0.2", 00:21:44.423 "hostsvcid": "60000", 00:21:44.423 "prchk_reftag": false, 00:21:44.423 "prchk_guard": false, 00:21:44.423 "hdgst": false, 00:21:44.423 "ddgst": false, 00:21:44.423 "method": "bdev_nvme_attach_controller", 00:21:44.423 "req_id": 1 00:21:44.423 } 00:21:44.423 Got JSON-RPC error response 00:21:44.423 response: 00:21:44.423 { 00:21:44.423 "code": -114, 00:21:44.423 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:44.423 } 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.423 request: 00:21:44.423 { 00:21:44.423 "name": "NVMe0", 00:21:44.423 "trtype": "tcp", 00:21:44.423 "traddr": "10.0.0.2", 00:21:44.423 "adrfam": "ipv4", 00:21:44.423 "trsvcid": "4420", 00:21:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:44.423 "hostaddr": "10.0.0.2", 00:21:44.423 "hostsvcid": "60000", 00:21:44.423 "prchk_reftag": false, 00:21:44.423 "prchk_guard": false, 00:21:44.423 
"hdgst": false, 00:21:44.423 "ddgst": false, 00:21:44.423 "method": "bdev_nvme_attach_controller", 00:21:44.423 "req_id": 1 00:21:44.423 } 00:21:44.423 Got JSON-RPC error response 00:21:44.423 response: 00:21:44.423 { 00:21:44.423 "code": -114, 00:21:44.423 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:44.423 } 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.423 request: 00:21:44.423 { 00:21:44.423 "name": "NVMe0", 00:21:44.423 "trtype": "tcp", 00:21:44.423 "traddr": "10.0.0.2", 00:21:44.423 "adrfam": "ipv4", 00:21:44.423 "trsvcid": "4420", 00:21:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.423 "hostaddr": "10.0.0.2", 00:21:44.423 "hostsvcid": "60000", 00:21:44.423 "prchk_reftag": false, 00:21:44.423 "prchk_guard": false, 00:21:44.423 "hdgst": false, 00:21:44.423 "ddgst": false, 00:21:44.423 "multipath": "disable", 00:21:44.423 "method": "bdev_nvme_attach_controller", 00:21:44.423 "req_id": 1 00:21:44.423 } 00:21:44.423 Got JSON-RPC error response 00:21:44.423 response: 00:21:44.423 { 00:21:44.423 "code": -114, 00:21:44.423 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:44.423 } 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:44.423 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.682 21:59:38 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.682 request: 00:21:44.682 { 00:21:44.682 "name": "NVMe0", 00:21:44.682 "trtype": "tcp", 00:21:44.682 "traddr": "10.0.0.2", 00:21:44.682 "adrfam": "ipv4", 00:21:44.682 "trsvcid": "4420", 00:21:44.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.682 "hostaddr": "10.0.0.2", 00:21:44.682 "hostsvcid": "60000", 00:21:44.682 "prchk_reftag": false, 00:21:44.682 "prchk_guard": false, 00:21:44.682 "hdgst": false, 00:21:44.682 "ddgst": false, 00:21:44.682 "multipath": "failover", 00:21:44.682 "method": "bdev_nvme_attach_controller", 00:21:44.682 "req_id": 1 00:21:44.682 } 00:21:44.682 Got JSON-RPC error response 00:21:44.682 response: 00:21:44.682 { 00:21:44.682 "code": -114, 00:21:44.682 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:44.682 } 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.682 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.682 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.682 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.940 21:59:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.940 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:44.940 21:59:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.876 0 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3762719 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3762719 ']' 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3762719 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3762719 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3762719' 00:21:45.876 killing process with pid 3762719 00:21:45.876 21:59:40 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3762719 00:21:45.876 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3762719 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:46.135 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:46.135 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:46.135 [2024-07-15 21:59:37.542824] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:21:46.135 [2024-07-15 21:59:37.542872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762719 ] 00:21:46.135 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.135 [2024-07-15 21:59:37.599636] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.135 [2024-07-15 21:59:37.674499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.135 [2024-07-15 21:59:38.909061] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 072b7f85-3747-485a-b28b-c880ac4125af already exists 00:21:46.135 [2024-07-15 21:59:38.909090] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:072b7f85-3747-485a-b28b-c880ac4125af alias for bdev NVMe1n1 00:21:46.135 [2024-07-15 21:59:38.909098] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:46.135 Running I/O for 1 seconds... 
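The multicontroller checks replayed in try.txt above reduce to the JSON-RPC sequence below, shown here through SPDK's scripts/rpc.py (what the rpc_cmd wrapper invokes; rpc.py defaults to the target's /var/tmp/spdk.sock). A condensed sketch of the key calls and their expected outcomes, not the full multicontroller.sh flow:

    # Target side: one malloc-backed subsystem listening on two ports.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Initiator side, against bdevperf's socket: the first attach creates NVMe0n1.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # Reusing the name NVMe0 with a different hostnqn or subsystem, or with
    # multipath disabled (-x disable), fails with JSON-RPC error -114 as logged;
    # a genuine second path to the same subsystem on port 4421 succeeds:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # The one-second write pass is then kicked off exactly as in the log:
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests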
00:21:46.136 
00:21:46.136                                                                                                  Latency(us)
00:21:46.136 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:46.136 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:46.136      NVMe0n1                :       1.01   22984.93      89.78       0.00       0.00    5550.57    1517.30    6696.07
00:21:46.136 ===================================================================================================================
00:21:46.136 Total                       :              22984.93      89.78       0.00       0.00    5550.57    1517.30    6696.07
00:21:46.136 Received shutdown signal, test time was about 1.000000 seconds
00:21:46.136 
00:21:46.136                                                                                                  Latency(us)
00:21:46.136 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:46.136 ===================================================================================================================
00:21:46.136 Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:21:46.136 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:46.136 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:46.136 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:21:46.136 21:59:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:21:46.136 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:46.136 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:21:46.136 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:46.136 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:21:46.136 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:46.136 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:46.136 rmmod nvme_tcp
00:21:46.136 rmmod nvme_fabrics
00:21:46.136 rmmod nvme_keyring
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3762660 ']'
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3762660
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3762660 ']'
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3762660
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3762660
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3762660'
00:21:46.395 killing process with pid 3762660
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3762660
00:21:46.395 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3762660
00:21:46.654 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:46.654 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:46.654 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:46.654 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:46.654 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:46.654 21:59:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:46.654 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:46.654 21:59:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:48.559 21:59:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:48.559 
00:21:48.559 real 0m11.838s
00:21:48.559 user 0m16.494s
00:21:48.559 sys 0m4.874s
00:21:48.559 21:59:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:48.559 21:59:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:48.559 ************************************
00:21:48.559 END TEST nvmf_multicontroller
00:21:48.559 ************************************
00:21:48.559 21:59:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:21:48.559 21:59:42 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:21:48.559 21:59:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:48.559 21:59:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:48.559 21:59:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:48.559 ************************************
00:21:48.559 START TEST nvmf_aer
00:21:48.559 ************************************
00:21:48.559 21:59:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:21:48.818 * Looking for test storage...
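Before aer.sh repeats the same bring-up, note that the nvmftestfini teardown traced above unwinds the rig roughly as follows; a sketch that assumes _remove_spdk_ns (whose body is hidden by xtrace_disable_per_cmd) just deletes the namespace created during init:

    sync
    modprobe -v -r nvme-tcp              # also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt reactors
    ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1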
00:21:48.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.818 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.819 21:59:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:54.181 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:21:54.181 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.181 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:54.182 Found net devices under 0000:86:00.0: cvl_0_0 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:54.182 Found net devices under 0000:86:00.1: cvl_0_1 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.182 21:59:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.182 
21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:54.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:21:54.182 00:21:54.182 --- 10.0.0.2 ping statistics --- 00:21:54.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.182 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:54.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:21:54.182 00:21:54.182 --- 10.0.0.1 ping statistics --- 00:21:54.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.182 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3766709 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3766709 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3766709 ']' 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.182 21:59:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.182 [2024-07-15 21:59:48.319385] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:21:54.182 [2024-07-15 21:59:48.319427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.182 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.182 [2024-07-15 21:59:48.376346] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.441 [2024-07-15 21:59:48.456948] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.441 [2024-07-15 21:59:48.456982] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:54.441 [2024-07-15 21:59:48.456989] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.441 [2024-07-15 21:59:48.456995] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.441 [2024-07-15 21:59:48.457000] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.441 [2024-07-15 21:59:48.457045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.441 [2024-07-15 21:59:48.457142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.441 [2024-07-15 21:59:48.457233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.441 [2024-07-15 21:59:48.457234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.008 [2024-07-15 21:59:49.180329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.008 Malloc0 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.008 [2024-07-15 21:59:49.232273] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.008 [ 00:21:55.008 { 00:21:55.008 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:55.008 "subtype": "Discovery", 00:21:55.008 "listen_addresses": [], 00:21:55.008 "allow_any_host": true, 00:21:55.008 "hosts": [] 00:21:55.008 }, 00:21:55.008 { 00:21:55.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.008 "subtype": "NVMe", 00:21:55.008 "listen_addresses": [ 00:21:55.008 { 00:21:55.008 "trtype": "TCP", 00:21:55.008 "adrfam": "IPv4", 00:21:55.008 "traddr": "10.0.0.2", 00:21:55.008 "trsvcid": "4420" 00:21:55.008 } 00:21:55.008 ], 00:21:55.008 "allow_any_host": true, 00:21:55.008 "hosts": [], 00:21:55.008 "serial_number": "SPDK00000000000001", 00:21:55.008 "model_number": "SPDK bdev Controller", 00:21:55.008 "max_namespaces": 2, 00:21:55.008 "min_cntlid": 1, 00:21:55.008 "max_cntlid": 65519, 00:21:55.008 "namespaces": [ 00:21:55.008 { 00:21:55.008 "nsid": 1, 00:21:55.008 "bdev_name": "Malloc0", 00:21:55.008 "name": "Malloc0", 00:21:55.008 "nguid": "874871A12242452F9983EFF4C1BBEAA9", 00:21:55.008 "uuid": "874871a1-2242-452f-9983-eff4c1bbeaa9" 00:21:55.008 } 00:21:55.008 ] 00:21:55.008 } 00:21:55.008 ] 00:21:55.008 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3766955 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:55.268 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.268 Malloc1 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.268 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.527 Asynchronous Event Request test 00:21:55.527 Attaching to 10.0.0.2 00:21:55.527 Attached to 10.0.0.2 00:21:55.527 Registering asynchronous event callbacks... 00:21:55.527 Starting namespace attribute notice tests for all controllers... 00:21:55.527 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:55.527 aer_cb - Changed Namespace 00:21:55.527 Cleaning up... 00:21:55.527 [ 00:21:55.527 { 00:21:55.527 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:55.527 "subtype": "Discovery", 00:21:55.527 "listen_addresses": [], 00:21:55.527 "allow_any_host": true, 00:21:55.527 "hosts": [] 00:21:55.527 }, 00:21:55.527 { 00:21:55.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.527 "subtype": "NVMe", 00:21:55.527 "listen_addresses": [ 00:21:55.527 { 00:21:55.527 "trtype": "TCP", 00:21:55.527 "adrfam": "IPv4", 00:21:55.527 "traddr": "10.0.0.2", 00:21:55.527 "trsvcid": "4420" 00:21:55.527 } 00:21:55.527 ], 00:21:55.527 "allow_any_host": true, 00:21:55.527 "hosts": [], 00:21:55.527 "serial_number": "SPDK00000000000001", 00:21:55.527 "model_number": "SPDK bdev Controller", 00:21:55.527 "max_namespaces": 2, 00:21:55.527 "min_cntlid": 1, 00:21:55.527 "max_cntlid": 65519, 00:21:55.527 "namespaces": [ 00:21:55.527 { 00:21:55.527 "nsid": 1, 00:21:55.527 "bdev_name": "Malloc0", 00:21:55.527 "name": "Malloc0", 00:21:55.527 "nguid": "874871A12242452F9983EFF4C1BBEAA9", 00:21:55.527 "uuid": "874871a1-2242-452f-9983-eff4c1bbeaa9" 00:21:55.527 }, 00:21:55.527 { 00:21:55.527 "nsid": 2, 00:21:55.527 "bdev_name": "Malloc1", 00:21:55.527 "name": "Malloc1", 00:21:55.527 "nguid": "1566AE65981641858351B1C35E3ACA29", 00:21:55.527 "uuid": "1566ae65-9816-4185-8351-b1c35e3aca29" 00:21:55.527 } 00:21:55.527 ] 00:21:55.527 } 00:21:55.527 ] 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3766955 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:55.527 rmmod nvme_tcp 00:21:55.527 rmmod nvme_fabrics 00:21:55.527 rmmod nvme_keyring 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3766709 ']' 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3766709 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3766709 ']' 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3766709 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3766709 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3766709' 00:21:55.527 killing process with pid 3766709 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3766709 00:21:55.527 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3766709 00:21:55.784 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.784 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.784 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.784 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.784 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.784 21:59:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.784 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:21:55.784 21:59:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.318 21:59:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.318 00:21:58.318 real 0m9.161s 00:21:58.318 user 0m7.159s 00:21:58.318 sys 0m4.513s 00:21:58.318 21:59:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.318 21:59:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:58.318 ************************************ 00:21:58.318 END TEST nvmf_aer 00:21:58.318 ************************************ 00:21:58.318 21:59:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:58.318 21:59:51 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:58.318 21:59:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:58.318 21:59:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.318 21:59:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.318 ************************************ 00:21:58.318 START TEST nvmf_async_init 00:21:58.318 ************************************ 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:58.318 * Looking for test storage... 00:21:58.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8b49cc3867bf4357b3d4a6b433ce0200 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.318 21:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:03.588 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:03.588 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:03.588 Found net devices under 0000:86:00.0: cvl_0_0 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:03.588 Found net devices under 0000:86:00.1: cvl_0_1 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:03.588 
21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.588 21:59:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:03.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:22:03.588 00:22:03.588 --- 10.0.0.2 ping statistics --- 00:22:03.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.588 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:03.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:22:03.588 00:22:03.588 --- 10.0.0.1 ping statistics --- 00:22:03.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.588 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.588 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.589 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3770256 00:22:03.589 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3770256 00:22:03.589 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:22:03.589 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3770256 ']' 00:22:03.589 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.589 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.589 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.589 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.589 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.589 [2024-07-15 21:59:57.097171] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:22:03.589 [2024-07-15 21:59:57.097221] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.589 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.589 [2024-07-15 21:59:57.154266] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.589 [2024-07-15 21:59:57.233769] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.589 [2024-07-15 21:59:57.233802] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.589 [2024-07-15 21:59:57.233809] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.589 [2024-07-15 21:59:57.233815] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.589 [2024-07-15 21:59:57.233820] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
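Note the core mask: this run starts nvmf_tgt with -m 0x1, so only reactor 0 comes up, whereas the nvmf_aer run above used -m 0xF and started four reactors. Once the target is listening on /var/tmp/spdk.sock, the harness provisions it through rpc_cmd, a thin wrapper around scripts/rpc.py. An equivalent manual session for the steps traced below would look roughly like this (run from the SPDK repo root):

    # target runs inside the target namespace, as nvmfappstart launched it above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512      # 1024 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 8b49cc3867bf4357b3d4a6b433ce0200
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420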
00:22:03.589 [2024-07-15 21:59:57.233838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.847 [2024-07-15 21:59:57.950213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.847 null0 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8b49cc3867bf4357b3d4a6b433ce0200 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.847 [2024-07-15 21:59:57.990405] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.847 21:59:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.106 nvme0n1 00:22:04.106 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.106 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:04.106 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.106 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.106 [ 00:22:04.106 { 00:22:04.106 "name": "nvme0n1", 00:22:04.106 "aliases": [ 00:22:04.106 "8b49cc38-67bf-4357-b3d4-a6b433ce0200" 00:22:04.106 ], 00:22:04.106 "product_name": "NVMe disk", 00:22:04.106 "block_size": 512, 00:22:04.106 "num_blocks": 2097152, 00:22:04.106 "uuid": "8b49cc38-67bf-4357-b3d4-a6b433ce0200", 00:22:04.106 "assigned_rate_limits": { 00:22:04.106 "rw_ios_per_sec": 0, 00:22:04.106 "rw_mbytes_per_sec": 0, 00:22:04.106 "r_mbytes_per_sec": 0, 00:22:04.106 "w_mbytes_per_sec": 0 00:22:04.106 }, 00:22:04.106 "claimed": false, 00:22:04.106 "zoned": false, 00:22:04.106 "supported_io_types": { 00:22:04.106 "read": true, 00:22:04.106 "write": true, 00:22:04.106 "unmap": false, 00:22:04.106 "flush": true, 00:22:04.106 "reset": true, 00:22:04.106 "nvme_admin": true, 00:22:04.106 "nvme_io": true, 00:22:04.106 "nvme_io_md": false, 00:22:04.106 "write_zeroes": true, 00:22:04.106 "zcopy": false, 00:22:04.106 "get_zone_info": false, 00:22:04.106 "zone_management": false, 00:22:04.106 "zone_append": false, 00:22:04.106 "compare": true, 00:22:04.106 "compare_and_write": true, 00:22:04.106 "abort": true, 00:22:04.106 "seek_hole": false, 00:22:04.106 "seek_data": false, 00:22:04.106 "copy": true, 00:22:04.106 "nvme_iov_md": false 00:22:04.106 }, 00:22:04.106 "memory_domains": [ 00:22:04.106 { 00:22:04.106 "dma_device_id": "system", 00:22:04.106 "dma_device_type": 1 00:22:04.106 } 00:22:04.106 ], 00:22:04.106 "driver_specific": { 00:22:04.106 "nvme": [ 00:22:04.106 { 00:22:04.106 "trid": { 00:22:04.106 "trtype": "TCP", 00:22:04.106 "adrfam": "IPv4", 00:22:04.106 "traddr": "10.0.0.2", 00:22:04.106 "trsvcid": "4420", 00:22:04.106 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:04.106 }, 00:22:04.106 "ctrlr_data": { 00:22:04.106 "cntlid": 1, 00:22:04.106 "vendor_id": "0x8086", 00:22:04.106 "model_number": "SPDK bdev Controller", 00:22:04.106 "serial_number": "00000000000000000000", 00:22:04.106 "firmware_revision": "24.09", 00:22:04.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.106 "oacs": { 00:22:04.106 "security": 0, 00:22:04.106 "format": 0, 00:22:04.106 "firmware": 0, 00:22:04.106 "ns_manage": 0 00:22:04.106 }, 00:22:04.106 "multi_ctrlr": true, 00:22:04.106 "ana_reporting": false 00:22:04.106 }, 00:22:04.106 "vs": { 00:22:04.106 "nvme_version": "1.3" 00:22:04.106 }, 00:22:04.106 "ns_data": { 00:22:04.106 "id": 1, 00:22:04.106 "can_share": true 00:22:04.106 } 00:22:04.106 } 00:22:04.106 ], 00:22:04.106 "mp_policy": "active_passive" 00:22:04.106 } 00:22:04.106 } 00:22:04.106 ] 00:22:04.106 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.106 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
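bdev_nvme_reset_controller tears the TCP connection down and lets the bdev layer reconnect, which is what the "resetting controller" and "Bad file descriptor" messages below record. The useful signal is in the descriptor dumped afterwards: "cntlid" moves from 1 to 2, because the reconnect establishes a fresh controller on the target side. A quick way to watch just that field between resets (jq is an assumption here; the test itself parses nothing):

    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'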
00:22:04.106 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.106 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.106 [2024-07-15 21:59:58.238933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:04.106 [2024-07-15 21:59:58.238987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2632350 (9): Bad file descriptor 00:22:04.365 [2024-07-15 21:59:58.371303] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 [ 00:22:04.365 { 00:22:04.365 "name": "nvme0n1", 00:22:04.365 "aliases": [ 00:22:04.365 "8b49cc38-67bf-4357-b3d4-a6b433ce0200" 00:22:04.365 ], 00:22:04.365 "product_name": "NVMe disk", 00:22:04.365 "block_size": 512, 00:22:04.365 "num_blocks": 2097152, 00:22:04.365 "uuid": "8b49cc38-67bf-4357-b3d4-a6b433ce0200", 00:22:04.365 "assigned_rate_limits": { 00:22:04.365 "rw_ios_per_sec": 0, 00:22:04.365 "rw_mbytes_per_sec": 0, 00:22:04.365 "r_mbytes_per_sec": 0, 00:22:04.365 "w_mbytes_per_sec": 0 00:22:04.365 }, 00:22:04.365 "claimed": false, 00:22:04.365 "zoned": false, 00:22:04.365 "supported_io_types": { 00:22:04.365 "read": true, 00:22:04.365 "write": true, 00:22:04.365 "unmap": false, 00:22:04.365 "flush": true, 00:22:04.365 "reset": true, 00:22:04.365 "nvme_admin": true, 00:22:04.365 "nvme_io": true, 00:22:04.365 "nvme_io_md": false, 00:22:04.365 "write_zeroes": true, 00:22:04.365 "zcopy": false, 00:22:04.365 "get_zone_info": false, 00:22:04.365 "zone_management": false, 00:22:04.365 "zone_append": false, 00:22:04.365 "compare": true, 00:22:04.365 "compare_and_write": true, 00:22:04.365 "abort": true, 00:22:04.365 "seek_hole": false, 00:22:04.365 "seek_data": false, 00:22:04.365 "copy": true, 00:22:04.365 "nvme_iov_md": false 00:22:04.365 }, 00:22:04.365 "memory_domains": [ 00:22:04.365 { 00:22:04.365 "dma_device_id": "system", 00:22:04.365 "dma_device_type": 1 00:22:04.365 } 00:22:04.365 ], 00:22:04.365 "driver_specific": { 00:22:04.365 "nvme": [ 00:22:04.365 { 00:22:04.365 "trid": { 00:22:04.365 "trtype": "TCP", 00:22:04.365 "adrfam": "IPv4", 00:22:04.365 "traddr": "10.0.0.2", 00:22:04.365 "trsvcid": "4420", 00:22:04.365 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:04.365 }, 00:22:04.365 "ctrlr_data": { 00:22:04.365 "cntlid": 2, 00:22:04.365 "vendor_id": "0x8086", 00:22:04.365 "model_number": "SPDK bdev Controller", 00:22:04.365 "serial_number": "00000000000000000000", 00:22:04.365 "firmware_revision": "24.09", 00:22:04.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.365 "oacs": { 00:22:04.365 "security": 0, 00:22:04.365 "format": 0, 00:22:04.365 "firmware": 0, 00:22:04.365 "ns_manage": 0 00:22:04.365 }, 00:22:04.365 "multi_ctrlr": true, 00:22:04.365 "ana_reporting": false 00:22:04.365 }, 00:22:04.365 "vs": { 00:22:04.365 "nvme_version": "1.3" 00:22:04.365 }, 00:22:04.365 "ns_data": { 00:22:04.365 "id": 1, 00:22:04.365 "can_share": true 00:22:04.365 } 00:22:04.365 } 00:22:04.365 ], 00:22:04.365 "mp_policy": "active_passive" 00:22:04.365 } 00:22:04.365 } 
00:22:04.365 ] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.va8NIEk9EM 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.va8NIEk9EM 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 [2024-07-15 21:59:58.423507] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.365 [2024-07-15 21:59:58.423605] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.va8NIEk9EM 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 [2024-07-15 21:59:58.431521] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.va8NIEk9EM 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 [2024-07-15 21:59:58.439557] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.365 [2024-07-15 21:59:58.439591] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
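The deprecation WARNINGs above are expected: this test exercises the file-based PSK interface that SPDK schedules for removal in v24.09 (the log_deprecation_hits summary near the end of the test counts them). Condensed from the trace, the secure-channel setup amounts to:

    KEY=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    # require explicit host grants, then open a TLS listener on a second port
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"
    # reconnect over TLS with the same key on the host side
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

The descriptor that follows confirms the new session: trsvcid is now 4421 and cntlid has advanced to 3.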
00:22:04.365 nvme0n1 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 [ 00:22:04.365 { 00:22:04.365 "name": "nvme0n1", 00:22:04.365 "aliases": [ 00:22:04.365 "8b49cc38-67bf-4357-b3d4-a6b433ce0200" 00:22:04.365 ], 00:22:04.365 "product_name": "NVMe disk", 00:22:04.365 "block_size": 512, 00:22:04.365 "num_blocks": 2097152, 00:22:04.365 "uuid": "8b49cc38-67bf-4357-b3d4-a6b433ce0200", 00:22:04.365 "assigned_rate_limits": { 00:22:04.365 "rw_ios_per_sec": 0, 00:22:04.365 "rw_mbytes_per_sec": 0, 00:22:04.365 "r_mbytes_per_sec": 0, 00:22:04.365 "w_mbytes_per_sec": 0 00:22:04.365 }, 00:22:04.365 "claimed": false, 00:22:04.365 "zoned": false, 00:22:04.365 "supported_io_types": { 00:22:04.365 "read": true, 00:22:04.365 "write": true, 00:22:04.365 "unmap": false, 00:22:04.365 "flush": true, 00:22:04.365 "reset": true, 00:22:04.365 "nvme_admin": true, 00:22:04.365 "nvme_io": true, 00:22:04.365 "nvme_io_md": false, 00:22:04.365 "write_zeroes": true, 00:22:04.365 "zcopy": false, 00:22:04.365 "get_zone_info": false, 00:22:04.365 "zone_management": false, 00:22:04.365 "zone_append": false, 00:22:04.365 "compare": true, 00:22:04.365 "compare_and_write": true, 00:22:04.365 "abort": true, 00:22:04.365 "seek_hole": false, 00:22:04.365 "seek_data": false, 00:22:04.365 "copy": true, 00:22:04.365 "nvme_iov_md": false 00:22:04.365 }, 00:22:04.365 "memory_domains": [ 00:22:04.365 { 00:22:04.365 "dma_device_id": "system", 00:22:04.365 "dma_device_type": 1 00:22:04.365 } 00:22:04.365 ], 00:22:04.365 "driver_specific": { 00:22:04.365 "nvme": [ 00:22:04.365 { 00:22:04.365 "trid": { 00:22:04.365 "trtype": "TCP", 00:22:04.365 "adrfam": "IPv4", 00:22:04.365 "traddr": "10.0.0.2", 00:22:04.365 "trsvcid": "4421", 00:22:04.365 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:04.365 }, 00:22:04.365 "ctrlr_data": { 00:22:04.365 "cntlid": 3, 00:22:04.365 "vendor_id": "0x8086", 00:22:04.365 "model_number": "SPDK bdev Controller", 00:22:04.365 "serial_number": "00000000000000000000", 00:22:04.365 "firmware_revision": "24.09", 00:22:04.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.365 "oacs": { 00:22:04.365 "security": 0, 00:22:04.365 "format": 0, 00:22:04.365 "firmware": 0, 00:22:04.365 "ns_manage": 0 00:22:04.365 }, 00:22:04.365 "multi_ctrlr": true, 00:22:04.365 "ana_reporting": false 00:22:04.365 }, 00:22:04.365 "vs": { 00:22:04.365 "nvme_version": "1.3" 00:22:04.365 }, 00:22:04.365 "ns_data": { 00:22:04.365 "id": 1, 00:22:04.365 "can_share": true 00:22:04.365 } 00:22:04.365 } 00:22:04.365 ], 00:22:04.365 "mp_policy": "active_passive" 00:22:04.365 } 00:22:04.365 } 00:22:04.365 ] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.365 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.va8NIEk9EM 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:04.366 rmmod nvme_tcp 00:22:04.366 rmmod nvme_fabrics 00:22:04.366 rmmod nvme_keyring 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3770256 ']' 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3770256 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3770256 ']' 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3770256 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.366 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3770256 00:22:04.624 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:04.624 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:04.624 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3770256' 00:22:04.624 killing process with pid 3770256 00:22:04.624 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3770256 00:22:04.624 [2024-07-15 21:59:58.636071] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:04.624 [2024-07-15 21:59:58.636098] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:04.624 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3770256 00:22:04.624 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.624 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:04.624 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:04.624 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.625 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.625 21:59:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.625 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.625 21:59:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:07.154 22:00:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:07.154 00:22:07.154 real 0m8.861s 00:22:07.154 user 0m3.272s 00:22:07.155 sys 0m4.065s 00:22:07.155 22:00:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.155 22:00:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:07.155 ************************************ 00:22:07.155 END TEST nvmf_async_init 00:22:07.155 ************************************ 00:22:07.155 22:00:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:07.155 22:00:00 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:07.155 22:00:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:07.155 22:00:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.155 22:00:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.155 ************************************ 00:22:07.155 START TEST dma 00:22:07.155 ************************************ 00:22:07.155 22:00:00 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:07.155 * Looking for test storage... 00:22:07.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:07.155 22:00:01 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.155 22:00:01 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.155 22:00:01 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.155 22:00:01 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.155 22:00:01 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.155 22:00:01 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.155 22:00:01 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.155 22:00:01 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:07.155 22:00:01 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.155 22:00:01 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.155 22:00:01 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:07.155 22:00:01 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:07.155 00:22:07.155 real 0m0.116s 00:22:07.155 user 0m0.057s 00:22:07.155 sys 0m0.067s 00:22:07.155 22:00:01 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.155 22:00:01 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:22:07.155 ************************************ 00:22:07.155 END TEST dma 00:22:07.155 ************************************ 00:22:07.155 22:00:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:07.155 22:00:01 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:07.155 22:00:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:07.155 22:00:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.155 22:00:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.155 ************************************ 00:22:07.155 START TEST nvmf_identify 00:22:07.155 ************************************ 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:07.155 * Looking for test storage... 00:22:07.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.155 22:00:01 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:07.156 22:00:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:12.433 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:12.434 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:12.434 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:12.434 Found net devices under 0000:86:00.0: cvl_0_0 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
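
Here nvmf/common.sh is enumerating candidate NICs: the e810/x722/mlx arrays are keyed by vendor:device IDs (the two matches in this run are 0x8086:0x159b, E810 ports bound to the ice driver), and each matched PCI function is mapped to its kernel netdev through the pci_net_devs glob seen above. A sketch of that sysfs mapping for one of the functions found in this run, assuming the stock sysfs layout (the loop body and operstate check are illustrative, not the harness's exact code):

  PCI_ADDR=0000:86:00.0
  # A network PCI function exposes its netdev name(s) under .../net/.
  for dev in /sys/bus/pci/devices/$PCI_ADDR/net/*; do
      name=${dev##*/}                 # e.g. cvl_0_0 in this run
      state=$(cat "$dev/operstate")   # the harness only keeps interfaces that are "up"
      echo "Found net device under $PCI_ADDR: $name ($state)"
  done
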
00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:12.434 Found net devices under 0000:86:00.1: cvl_0_1 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.434 22:00:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.434 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.434 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.434 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:12.434 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.434 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.434 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.434 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:12.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:12.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:22:12.434 00:22:12.434 --- 10.0.0.2 ping statistics --- 00:22:12.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.434 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:22:12.434 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:22:12.434 00:22:12.434 --- 10.0.0.1 ping statistics --- 00:22:12.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.435 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3774182 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3774182 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3774182 ']' 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.435 22:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.435 [2024-07-15 22:00:06.292040] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
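
Because both E810 ports live in the same machine, the harness fakes a two-host topology with a network namespace: the target port cvl_0_0 is moved into cvl_0_0_ns_spdk as 10.0.0.2 while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside the namespace. Condensed from the commands traced above (interface, namespace, and address values as reported in this run; nvmf_tgt stands for the workspace build binary):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions, then start the target inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The sub-millisecond RTTs in the ping output above confirm the two "hosts" are one hop apart over the physical link before any NVMe traffic is attempted.
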
00:22:12.435 [2024-07-15 22:00:06.292086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.435 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.435 [2024-07-15 22:00:06.352230] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.435 [2024-07-15 22:00:06.434412] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.435 [2024-07-15 22:00:06.434451] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.435 [2024-07-15 22:00:06.434457] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.435 [2024-07-15 22:00:06.434463] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.435 [2024-07-15 22:00:06.434468] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.435 [2024-07-15 22:00:06.434513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.435 [2024-07-15 22:00:06.434610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.435 [2024-07-15 22:00:06.434694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.435 [2024-07-15 22:00:06.434696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 [2024-07-15 22:00:07.107150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 Malloc0 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 [2024-07-15 22:00:07.195346] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.015 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 [ 00:22:13.015 { 00:22:13.015 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:13.015 "subtype": "Discovery", 00:22:13.015 "listen_addresses": [ 00:22:13.015 { 00:22:13.015 "trtype": "TCP", 00:22:13.015 "adrfam": "IPv4", 00:22:13.015 "traddr": "10.0.0.2", 00:22:13.015 "trsvcid": "4420" 00:22:13.015 } 00:22:13.015 ], 00:22:13.015 "allow_any_host": true, 00:22:13.015 "hosts": [] 00:22:13.015 }, 00:22:13.015 { 00:22:13.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.015 "subtype": "NVMe", 00:22:13.015 "listen_addresses": [ 00:22:13.015 { 00:22:13.015 "trtype": "TCP", 00:22:13.015 "adrfam": "IPv4", 00:22:13.015 "traddr": "10.0.0.2", 00:22:13.015 "trsvcid": "4420" 00:22:13.015 } 00:22:13.015 ], 00:22:13.015 "allow_any_host": true, 00:22:13.015 "hosts": [], 00:22:13.015 "serial_number": "SPDK00000000000001", 00:22:13.015 "model_number": "SPDK bdev Controller", 00:22:13.015 "max_namespaces": 32, 00:22:13.015 "min_cntlid": 1, 00:22:13.015 "max_cntlid": 65519, 00:22:13.015 "namespaces": [ 00:22:13.015 { 00:22:13.015 "nsid": 1, 00:22:13.015 "bdev_name": "Malloc0", 00:22:13.015 "name": "Malloc0", 00:22:13.015 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:13.015 "eui64": "ABCDEF0123456789", 00:22:13.015 "uuid": "2bc4eec2-4f0f-4684-9320-98788c22826f" 00:22:13.015 } 00:22:13.015 ] 00:22:13.015 } 00:22:13.015 ] 00:22:13.016 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.016 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:13.016 [2024-07-15 22:00:07.245507] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
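
The identify fixture is assembled entirely over RPC before the tool runs: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem that allows any host, a namespace pinned to fixed NGUID/EUI-64 values, and data plus discovery listeners on 10.0.0.2:4420; nvmf_get_subsystems then returns the JSON dump shown above. The same setup as a plain RPC sequence, with flags exactly as traced in the log (rpc.py assumed to be SPDK's scripts/rpc.py):

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # same transport flags the harness passes for tcp
  rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_get_subsystems                           # emits the JSON shown above

With that in place, identify.sh points spdk_nvme_identify at the discovery NQN ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery') with -L all, which is what produces the DEBUG-level trace that follows.
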
00:22:13.016 [2024-07-15 22:00:07.245541] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774257 ] 00:22:13.016 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.277 [2024-07-15 22:00:07.276732] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:13.277 [2024-07-15 22:00:07.276775] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:13.277 [2024-07-15 22:00:07.276780] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:13.277 [2024-07-15 22:00:07.276791] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:13.277 [2024-07-15 22:00:07.276800] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:13.277 [2024-07-15 22:00:07.277057] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:13.277 [2024-07-15 22:00:07.277089] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8a1ec0 0 00:22:13.277 [2024-07-15 22:00:07.291236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:13.277 [2024-07-15 22:00:07.291249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:13.277 [2024-07-15 22:00:07.291256] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:13.277 [2024-07-15 22:00:07.291260] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:13.277 [2024-07-15 22:00:07.291296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.277 [2024-07-15 22:00:07.291302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.277 [2024-07-15 22:00:07.291305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.277 [2024-07-15 22:00:07.291317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:13.277 [2024-07-15 22:00:07.291332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.277 [2024-07-15 22:00:07.299237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.277 [2024-07-15 22:00:07.299245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.277 [2024-07-15 22:00:07.299249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.277 [2024-07-15 22:00:07.299253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0 00:22:13.277 [2024-07-15 22:00:07.299261] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:13.277 [2024-07-15 22:00:07.299268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:13.277 [2024-07-15 22:00:07.299273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:13.277 [2024-07-15 22:00:07.299288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.277 [2024-07-15 22:00:07.299291] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.277 [2024-07-15 22:00:07.299295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.277 [2024-07-15 22:00:07.299301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.277 [2024-07-15 22:00:07.299314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.277 [2024-07-15 22:00:07.299418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.277 [2024-07-15 22:00:07.299424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.277 [2024-07-15 22:00:07.299427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.277 [2024-07-15 22:00:07.299431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0 00:22:13.277 [2024-07-15 22:00:07.299438] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:13.277 [2024-07-15 22:00:07.299444] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:13.277 [2024-07-15 22:00:07.299451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.277 [2024-07-15 22:00:07.299454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.277 [2024-07-15 22:00:07.299457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.277 [2024-07-15 22:00:07.299463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.278 [2024-07-15 22:00:07.299474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.278 [2024-07-15 22:00:07.299555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.278 [2024-07-15 22:00:07.299561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.278 [2024-07-15 22:00:07.299564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.299567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0 00:22:13.278 [2024-07-15 22:00:07.299572] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:13.278 [2024-07-15 22:00:07.299579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:13.278 [2024-07-15 22:00:07.299585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.299588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.299591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.278 [2024-07-15 22:00:07.299597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.278 [2024-07-15 22:00:07.299606] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.278 [2024-07-15 22:00:07.299682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.278 
[2024-07-15 22:00:07.299687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.278 [2024-07-15 22:00:07.299691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.299694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0 00:22:13.278 [2024-07-15 22:00:07.299699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:13.278 [2024-07-15 22:00:07.299707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.299710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.299713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.278 [2024-07-15 22:00:07.299719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.278 [2024-07-15 22:00:07.299728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.278 [2024-07-15 22:00:07.299804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.278 [2024-07-15 22:00:07.299810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.278 [2024-07-15 22:00:07.299813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.299816] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0 00:22:13.278 [2024-07-15 22:00:07.299820] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:13.278 [2024-07-15 22:00:07.299825] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:13.278 [2024-07-15 22:00:07.299831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:13.278 [2024-07-15 22:00:07.299936] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:13.278 [2024-07-15 22:00:07.299940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:13.278 [2024-07-15 22:00:07.299947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.299951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.299954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.278 [2024-07-15 22:00:07.299961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.278 [2024-07-15 22:00:07.299972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.278 [2024-07-15 22:00:07.300046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.278 [2024-07-15 22:00:07.300051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.278 [2024-07-15 22:00:07.300054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:13.278 [2024-07-15 22:00:07.300058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0 00:22:13.278 [2024-07-15 22:00:07.300062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:13.278 [2024-07-15 22:00:07.300070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.300073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.300076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.278 [2024-07-15 22:00:07.300082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.278 [2024-07-15 22:00:07.300090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.278 [2024-07-15 22:00:07.300162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.278 [2024-07-15 22:00:07.300168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.278 [2024-07-15 22:00:07.300171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.300174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0 00:22:13.278 [2024-07-15 22:00:07.300178] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:13.278 [2024-07-15 22:00:07.300182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:13.278 [2024-07-15 22:00:07.300188] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:13.278 [2024-07-15 22:00:07.300196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:13.278 [2024-07-15 22:00:07.300205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.300209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.278 [2024-07-15 22:00:07.300214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.278 [2024-07-15 22:00:07.300233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.278 [2024-07-15 22:00:07.300358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.278 [2024-07-15 22:00:07.300364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.278 [2024-07-15 22:00:07.300367] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.300370] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a1ec0): datao=0, datal=4096, cccid=0 00:22:13.278 [2024-07-15 22:00:07.300374] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x924fc0) on tqpair(0x8a1ec0): expected_datao=0, payload_size=4096 00:22:13.278 [2024-07-15 22:00:07.300378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
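
The DEBUG trace through here is the textbook fabrics bring-up, one property at a time: FABRIC CONNECT on the admin queue (CNTLID 0x0001), PROPERTY GETs for VS and CAP, a CC read that finds CC.EN = 0 and CSTS.RDY = 0, a PROPERTY SET of CC.EN = 1, then polling until CSTS.RDY = 1 before the controller is declared ready and IDENTIFY is issued. spdk_nvme_identify performs this handshake itself in userspace; as a cross-check, the kernel initiator could be pointed at the same listener from the root namespace, a sketch assuming nvme-cli is installed (the nvme-tcp module was already loaded earlier in this log):

  modprobe nvme-tcp
  # Query the same discovery service the SPDK tool is talking to.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # Optionally connect to the data subsystem configured above.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
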
00:22:13.278 [2024-07-15 22:00:07.300407] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.300411] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.278 [2024-07-15 22:00:07.341321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.278 [2024-07-15 22:00:07.341324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0 00:22:13.278 [2024-07-15 22:00:07.341335] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:13.278 [2024-07-15 22:00:07.341340] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:13.278 [2024-07-15 22:00:07.341344] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:13.278 [2024-07-15 22:00:07.341349] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:13.278 [2024-07-15 22:00:07.341353] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:13.278 [2024-07-15 22:00:07.341357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:13.278 [2024-07-15 22:00:07.341365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:13.278 [2024-07-15 22:00:07.341372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.278 [2024-07-15 22:00:07.341385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:13.278 [2024-07-15 22:00:07.341397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.278 [2024-07-15 22:00:07.341476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.278 [2024-07-15 22:00:07.341482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.278 [2024-07-15 22:00:07.341485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0 00:22:13.278 [2024-07-15 22:00:07.341494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a1ec0) 00:22:13.278 [2024-07-15 22:00:07.341506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.278 [2024-07-15 22:00:07.341511] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8a1ec0) 00:22:13.278 [2024-07-15 22:00:07.341523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.278 [2024-07-15 22:00:07.341528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.278 [2024-07-15 22:00:07.341534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8a1ec0) 00:22:13.279 [2024-07-15 22:00:07.341539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.279 [2024-07-15 22:00:07.341544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.279 [2024-07-15 22:00:07.341557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.279 [2024-07-15 22:00:07.341562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:13.279 [2024-07-15 22:00:07.341572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:13.279 [2024-07-15 22:00:07.341578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a1ec0) 00:22:13.279 [2024-07-15 22:00:07.341587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.279 [2024-07-15 22:00:07.341598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x924fc0, cid 0, qid 0 00:22:13.279 [2024-07-15 22:00:07.341603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925140, cid 1, qid 0 00:22:13.279 [2024-07-15 22:00:07.341607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9252c0, cid 2, qid 0 00:22:13.279 [2024-07-15 22:00:07.341611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.279 [2024-07-15 22:00:07.341615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9255c0, cid 4, qid 0 00:22:13.279 [2024-07-15 22:00:07.341727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.279 [2024-07-15 22:00:07.341733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.279 [2024-07-15 22:00:07.341736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9255c0) on tqpair=0x8a1ec0 00:22:13.279 [2024-07-15 22:00:07.341744] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:13.279 [2024-07-15 22:00:07.341750] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:13.279 [2024-07-15 22:00:07.341759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a1ec0) 00:22:13.279 [2024-07-15 22:00:07.341769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.279 [2024-07-15 22:00:07.341779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9255c0, cid 4, qid 0 00:22:13.279 [2024-07-15 22:00:07.341865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.279 [2024-07-15 22:00:07.341872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.279 [2024-07-15 22:00:07.341875] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341878] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a1ec0): datao=0, datal=4096, cccid=4 00:22:13.279 [2024-07-15 22:00:07.341883] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9255c0) on tqpair(0x8a1ec0): expected_datao=0, payload_size=4096 00:22:13.279 [2024-07-15 22:00:07.341888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341918] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341923] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.279 [2024-07-15 22:00:07.341981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.279 [2024-07-15 22:00:07.341986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.341990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9255c0) on tqpair=0x8a1ec0 00:22:13.279 [2024-07-15 22:00:07.342003] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:13.279 [2024-07-15 22:00:07.342024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.342028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a1ec0) 00:22:13.279 [2024-07-15 22:00:07.342034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.279 [2024-07-15 22:00:07.342040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.342043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.279 [2024-07-15 22:00:07.342047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8a1ec0) 00:22:13.279 [2024-07-15 22:00:07.342054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.279 [2024-07-15 22:00:07.342067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x9255c0, cid 4, qid 0
00:22:13.279 [2024-07-15 22:00:07.342072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925740, cid 5, qid 0
00:22:13.279 [2024-07-15 22:00:07.342179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:13.279 [2024-07-15 22:00:07.342186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:13.279 [2024-07-15 22:00:07.342189] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.342192] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a1ec0): datao=0, datal=1024, cccid=4
00:22:13.279 [2024-07-15 22:00:07.342196] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9255c0) on tqpair(0x8a1ec0): expected_datao=0, payload_size=1024
00:22:13.279 [2024-07-15 22:00:07.342199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.342205] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.342208] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.342213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:13.279 [2024-07-15 22:00:07.342218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:13.279 [2024-07-15 22:00:07.342221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.342232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925740) on tqpair=0x8a1ec0
00:22:13.279 [2024-07-15 22:00:07.382381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:13.279 [2024-07-15 22:00:07.382394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:13.279 [2024-07-15 22:00:07.382397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.382401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9255c0) on tqpair=0x8a1ec0
00:22:13.279 [2024-07-15 22:00:07.382416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.382420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a1ec0)
00:22:13.279 [2024-07-15 22:00:07.382426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:13.279 [2024-07-15 22:00:07.382442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9255c0, cid 4, qid 0
00:22:13.279 [2024-07-15 22:00:07.382548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:13.279 [2024-07-15 22:00:07.382555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:13.279 [2024-07-15 22:00:07.382558] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.382561] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a1ec0): datao=0, datal=3072, cccid=4
00:22:13.279 [2024-07-15 22:00:07.382565] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9255c0) on tqpair(0x8a1ec0): expected_datao=0, payload_size=3072
00:22:13.279 [2024-07-15 22:00:07.382569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.382580] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.382583] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
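The GET LOG PAGE commands in this stretch of the trace are the discovery-log read sequence: cdw10 packs (NUMDL << 16) | LID, so cdw10:00ff0070 earlier was a 1024-byte read of log page 0x70 (Discovery), cdw10:02ff0070 above fetches the following 3072 bytes of entries, and cdw10:00010070 below re-reads just 8 bytes, the generation counter, to confirm the page did not change while it was being fetched (the c2h_data datal values of 1024, 3072 and 8 line up with this). A minimal sketch of issuing the same reads through SPDK's public API follows; it assumes a ctrlr already attached to the discovery subsystem, and the synchronous polling loop is illustrative rather than the tool's actual code.

/* Sketch only: read the discovery log the way the trace does --
 * header first, then the entries, then re-check genctr. */
#include <stdbool.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static volatile bool g_log_done;

static void
log_page_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
}

/* Issue one GET LOG PAGE (LID 0x70) read at the given offset and wait. */
static int
read_discovery_log(struct spdk_nvme_ctrlr *ctrlr, void *buf,
		   uint32_t len, uint64_t offset)
{
	g_log_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     0 /* nsid */, buf, len, offset,
					     log_page_done, NULL) != 0) {
		return -1;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}

A caller would first read the fixed header (struct spdk_nvmf_discovery_log_page) to learn numrec, then fetch the entries (struct spdk_nvmf_discovery_log_page_entry each) at the offset past the header, and finally re-read the first 8 bytes to confirm genctr is unchanged, which is the pattern the records above and below show on the wire.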
00:22:13.279 [2024-07-15 22:00:07.427238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:13.279 [2024-07-15 22:00:07.427249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:13.279 [2024-07-15 22:00:07.427252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.427256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9255c0) on tqpair=0x8a1ec0
00:22:13.279 [2024-07-15 22:00:07.427266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.427270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a1ec0)
00:22:13.279 [2024-07-15 22:00:07.427276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:13.279 [2024-07-15 22:00:07.427292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9255c0, cid 4, qid 0
00:22:13.279 [2024-07-15 22:00:07.427468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:13.279 [2024-07-15 22:00:07.427474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:13.279 [2024-07-15 22:00:07.427477] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.427480] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a1ec0): datao=0, datal=8, cccid=4
00:22:13.279 [2024-07-15 22:00:07.427484] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9255c0) on tqpair(0x8a1ec0): expected_datao=0, payload_size=8
00:22:13.279 [2024-07-15 22:00:07.427489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.427495] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.427498] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.469239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:13.279 [2024-07-15 22:00:07.469249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:13.279 [2024-07-15 22:00:07.469253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:13.279 [2024-07-15 22:00:07.469256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9255c0) on tqpair=0x8a1ec0
00:22:13.279 =====================================================
00:22:13.279 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:13.279 =====================================================
00:22:13.279 Controller Capabilities/Features
00:22:13.279 ================================
00:22:13.279 Vendor ID: 0000
00:22:13.279 Subsystem Vendor ID: 0000
00:22:13.279 Serial Number: ....................
00:22:13.279 Model Number: ........................................
00:22:13.279 Firmware Version: 24.09
00:22:13.279 Recommended Arb Burst: 0
00:22:13.279 IEEE OUI Identifier: 00 00 00
00:22:13.279 Multi-path I/O
00:22:13.279 May have multiple subsystem ports: No
00:22:13.279 May have multiple controllers: No
00:22:13.279 Associated with SR-IOV VF: No
00:22:13.279 Max Data Transfer Size: 131072
00:22:13.279 Max Number of Namespaces: 0
00:22:13.279 Max Number of I/O Queues: 1024
00:22:13.279 NVMe Specification Version (VS): 1.3
00:22:13.279 NVMe Specification Version (Identify): 1.3
00:22:13.279 Maximum Queue Entries: 128
00:22:13.279 Contiguous Queues Required: Yes
00:22:13.279 Arbitration Mechanisms Supported
00:22:13.279 Weighted Round Robin: Not Supported
00:22:13.279 Vendor Specific: Not Supported
00:22:13.279 Reset Timeout: 15000 ms
00:22:13.279 Doorbell Stride: 4 bytes
00:22:13.279 NVM Subsystem Reset: Not Supported
00:22:13.279 Command Sets Supported
00:22:13.280 NVM Command Set: Supported
00:22:13.280 Boot Partition: Not Supported
00:22:13.280 Memory Page Size Minimum: 4096 bytes
00:22:13.280 Memory Page Size Maximum: 4096 bytes
00:22:13.280 Persistent Memory Region: Not Supported
00:22:13.280 Optional Asynchronous Events Supported
00:22:13.280 Namespace Attribute Notices: Not Supported
00:22:13.280 Firmware Activation Notices: Not Supported
00:22:13.280 ANA Change Notices: Not Supported
00:22:13.280 PLE Aggregate Log Change Notices: Not Supported
00:22:13.280 LBA Status Info Alert Notices: Not Supported
00:22:13.280 EGE Aggregate Log Change Notices: Not Supported
00:22:13.280 Normal NVM Subsystem Shutdown event: Not Supported
00:22:13.280 Zone Descriptor Change Notices: Not Supported
00:22:13.280 Discovery Log Change Notices: Supported
00:22:13.280 Controller Attributes
00:22:13.280 128-bit Host Identifier: Not Supported
00:22:13.280 Non-Operational Permissive Mode: Not Supported
00:22:13.280 NVM Sets: Not Supported
00:22:13.280 Read Recovery Levels: Not Supported
00:22:13.280 Endurance Groups: Not Supported
00:22:13.280 Predictable Latency Mode: Not Supported
00:22:13.280 Traffic Based Keep ALive: Not Supported
00:22:13.280 Namespace Granularity: Not Supported
00:22:13.280 SQ Associations: Not Supported
00:22:13.280 UUID List: Not Supported
00:22:13.280 Multi-Domain Subsystem: Not Supported
00:22:13.280 Fixed Capacity Management: Not Supported
00:22:13.280 Variable Capacity Management: Not Supported
00:22:13.280 Delete Endurance Group: Not Supported
00:22:13.280 Delete NVM Set: Not Supported
00:22:13.280 Extended LBA Formats Supported: Not Supported
00:22:13.280 Flexible Data Placement Supported: Not Supported
00:22:13.280 
00:22:13.280 Controller Memory Buffer Support
00:22:13.280 ================================
00:22:13.280 Supported: No
00:22:13.280 
00:22:13.280 Persistent Memory Region Support
00:22:13.280 ================================
00:22:13.280 Supported: No
00:22:13.280 
00:22:13.280 Admin Command Set Attributes
00:22:13.280 ============================
00:22:13.280 Security Send/Receive: Not Supported
00:22:13.280 Format NVM: Not Supported
00:22:13.280 Firmware Activate/Download: Not Supported
00:22:13.280 Namespace Management: Not Supported
00:22:13.280 Device Self-Test: Not Supported
00:22:13.280 Directives: Not Supported
00:22:13.280 NVMe-MI: Not Supported
00:22:13.280 Virtualization Management: Not Supported
00:22:13.280 Doorbell Buffer Config: Not Supported
00:22:13.280 Get LBA Status Capability: Not Supported
00:22:13.280 Command & Feature Lockdown Capability: Not Supported
00:22:13.280 Abort Command Limit: 1
00:22:13.280 Async Event Request Limit: 4
00:22:13.280 Number of Firmware Slots: N/A
00:22:13.280 Firmware Slot 1 Read-Only: N/A
00:22:13.280 Firmware Activation Without Reset: N/A
00:22:13.280 Multiple Update Detection Support: N/A
00:22:13.280 Firmware Update Granularity: No Information Provided
00:22:13.280 Per-Namespace SMART Log: No
00:22:13.280 Asymmetric Namespace Access Log Page: Not Supported
00:22:13.280 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:13.280 Command Effects Log Page: Not Supported
00:22:13.280 Get Log Page Extended Data: Supported
00:22:13.280 Telemetry Log Pages: Not Supported
00:22:13.280 Persistent Event Log Pages: Not Supported
00:22:13.280 Supported Log Pages Log Page: May Support
00:22:13.280 Commands Supported & Effects Log Page: Not Supported
00:22:13.280 Feature Identifiers & Effects Log Page:May Support
00:22:13.280 NVMe-MI Commands & Effects Log Page: May Support
00:22:13.280 Data Area 4 for Telemetry Log: Not Supported
00:22:13.280 Error Log Page Entries Supported: 128
00:22:13.280 Keep Alive: Not Supported
00:22:13.280 
00:22:13.280 NVM Command Set Attributes
00:22:13.280 ==========================
00:22:13.280 Submission Queue Entry Size
00:22:13.280 Max: 1
00:22:13.280 Min: 1
00:22:13.280 Completion Queue Entry Size
00:22:13.280 Max: 1
00:22:13.280 Min: 1
00:22:13.280 Number of Namespaces: 0
00:22:13.280 Compare Command: Not Supported
00:22:13.280 Write Uncorrectable Command: Not Supported
00:22:13.280 Dataset Management Command: Not Supported
00:22:13.280 Write Zeroes Command: Not Supported
00:22:13.280 Set Features Save Field: Not Supported
00:22:13.280 Reservations: Not Supported
00:22:13.280 Timestamp: Not Supported
00:22:13.280 Copy: Not Supported
00:22:13.280 Volatile Write Cache: Not Present
00:22:13.280 Atomic Write Unit (Normal): 1
00:22:13.280 Atomic Write Unit (PFail): 1
00:22:13.280 Atomic Compare & Write Unit: 1
00:22:13.280 Fused Compare & Write: Supported
00:22:13.280 Scatter-Gather List
00:22:13.280 SGL Command Set: Supported
00:22:13.280 SGL Keyed: Supported
00:22:13.280 SGL Bit Bucket Descriptor: Not Supported
00:22:13.280 SGL Metadata Pointer: Not Supported
00:22:13.280 Oversized SGL: Not Supported
00:22:13.280 SGL Metadata Address: Not Supported
00:22:13.280 SGL Offset: Supported
00:22:13.280 Transport SGL Data Block: Not Supported
00:22:13.280 Replay Protected Memory Block: Not Supported
00:22:13.280 
00:22:13.280 Firmware Slot Information
00:22:13.280 =========================
00:22:13.280 Active slot: 0
00:22:13.280 
00:22:13.280 
00:22:13.280 Error Log
00:22:13.280 =========
00:22:13.280 
00:22:13.280 Active Namespaces
00:22:13.280 =================
00:22:13.280 Discovery Log Page
00:22:13.280 ==================
00:22:13.280 Generation Counter: 2
00:22:13.280 Number of Records: 2
00:22:13.280 Record Format: 0
00:22:13.280 
00:22:13.280 Discovery Log Entry 0
00:22:13.280 ----------------------
00:22:13.280 Transport Type: 3 (TCP)
00:22:13.280 Address Family: 1 (IPv4)
00:22:13.280 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:13.280 Entry Flags:
00:22:13.280 Duplicate Returned Information: 1
00:22:13.280 Explicit Persistent Connection Support for Discovery: 1
00:22:13.280 Transport Requirements:
00:22:13.280 Secure Channel: Not Required
00:22:13.280 Port ID: 0 (0x0000)
00:22:13.280 Controller ID: 65535 (0xffff)
00:22:13.280 Admin Max SQ Size: 128
00:22:13.280 Transport Service Identifier: 4420
00:22:13.280 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:13.280 Transport Address: 10.0.0.2
00:22:13.280 Discovery Log Entry 1
00:22:13.280 ----------------------
00:22:13.280 Transport Type: 3 (TCP)
00:22:13.280 Address Family: 1 (IPv4)
00:22:13.280 Subsystem Type: 2 (NVM Subsystem)
00:22:13.280 Entry Flags:
00:22:13.280 Duplicate Returned Information: 0
00:22:13.280 Explicit Persistent Connection Support for Discovery: 0
00:22:13.280 Transport Requirements:
00:22:13.280 Secure Channel: Not Required
00:22:13.280 Port ID: 0 (0x0000)
00:22:13.280 Controller ID: 65535 (0xffff)
00:22:13.280 Admin Max SQ Size: 128
00:22:13.280 Transport Service Identifier: 4420
00:22:13.280 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:13.280 Transport Address: 10.0.0.2
[2024-07-15 22:00:07.469335] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:22:13.280 [2024-07-15 22:00:07.469344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x924fc0) on tqpair=0x8a1ec0
00:22:13.280 [2024-07-15 22:00:07.469350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:13.280 [2024-07-15 22:00:07.469355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925140) on tqpair=0x8a1ec0
00:22:13.280 [2024-07-15 22:00:07.469359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:13.280 [2024-07-15 22:00:07.469363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9252c0) on tqpair=0x8a1ec0
00:22:13.280 [2024-07-15 22:00:07.469367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:13.280 [2024-07-15 22:00:07.469372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0
00:22:13.280 [2024-07-15 22:00:07.469375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:13.280 [2024-07-15 22:00:07.469385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:13.280 [2024-07-15 22:00:07.469389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:13.280 [2024-07-15 22:00:07.469392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0)
00:22:13.280 [2024-07-15 22:00:07.469399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:13.280 [2024-07-15 22:00:07.469414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0
00:22:13.280 [2024-07-15 22:00:07.469510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:13.280 [2024-07-15 22:00:07.469516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:13.280 [2024-07-15 22:00:07.469519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:13.280 [2024-07-15 22:00:07.469522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0
00:22:13.280 [2024-07-15 22:00:07.469528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:13.280 [2024-07-15 22:00:07.469532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:13.280 [2024-07-15 22:00:07.469535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0)
00:22:13.280 [2024-07-15 22:00:07.469541]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.280 [2024-07-15 22:00:07.469554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.280 [2024-07-15 22:00:07.469666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.469672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.469675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.469678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.469682] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:13.281 [2024-07-15 22:00:07.469686] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:13.281 [2024-07-15 22:00:07.469694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.469698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.469701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.469707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.469716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.469793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.469799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.469802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.469805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.469814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.469818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.469821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.469827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.469836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.469921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.469927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.469930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.469933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.469941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.469945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.469948] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.469956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.469965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.470045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.470050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.470053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.470065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.470077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.470086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.470161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.470167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.470170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.470181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.470193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.470202] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.470286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.470292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.470295] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.470306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.470319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.470328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.470451] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.470456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.470459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.470471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.470484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.470495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.470570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.470576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.470579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.470591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.470603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.470612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.470684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.470690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.470693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.470704] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.470717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.470725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.470803] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.470809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.470812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.470823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.470835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.470844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.470921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.470927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.470930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.470941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.470948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.470953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.470964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.281 [2024-07-15 22:00:07.471044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.281 [2024-07-15 22:00:07.471050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.281 [2024-07-15 22:00:07.471053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.471057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.281 [2024-07-15 22:00:07.471065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.471069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.281 [2024-07-15 22:00:07.471072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.281 [2024-07-15 22:00:07.471077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.281 [2024-07-15 22:00:07.471087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.282 [2024-07-15 22:00:07.471160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.282 [2024-07-15 22:00:07.471166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.282 [2024-07-15 22:00:07.471169] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.282 [2024-07-15 22:00:07.471172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.282 [2024-07-15 22:00:07.471180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.282 [2024-07-15 22:00:07.471184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.282 [2024-07-15 22:00:07.471187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.282 [2024-07-15 22:00:07.471193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.282 [2024-07-15 22:00:07.471201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.282 [2024-07-15 22:00:07.471287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.282 [2024-07-15 22:00:07.471293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.282 [2024-07-15 22:00:07.471296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.282 [2024-07-15 22:00:07.471300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.282 [2024-07-15 22:00:07.471307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.282 [2024-07-15 22:00:07.471311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.282 [2024-07-15 22:00:07.471314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.282 [2024-07-15 22:00:07.471320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.282 [2024-07-15 22:00:07.471330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.282 [2024-07-15 22:00:07.471534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.282 [2024-07-15 22:00:07.471540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.282 [2024-07-15 22:00:07.471543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.282 [2024-07-15 22:00:07.471546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.471556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.471568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.471578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.471698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.471704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.471707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 
[2024-07-15 22:00:07.471719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.471731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.471740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.471817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.471823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.471826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.471837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.471849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.471859] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.471936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.471941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.471944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.471955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.471962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.471968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.471977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.472051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.472056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.472059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.472071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 
22:00:07.472077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.472083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.472093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.472167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.472173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.472180] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.472191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.472204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.472213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.472296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.472303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.472306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.472317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.472329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.472339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.472419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.472424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.472427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.472439] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.472451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.472460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.472533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.472539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.472542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.472553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.472566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.472575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.472656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.472661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.472664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.472677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.472690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.472699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.472771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.472777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.472780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.472791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.472803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.472812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 
22:00:07.472889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.472895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.472898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472901] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.472909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.472916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.472921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.472930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.473010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.473015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.283 [2024-07-15 22:00:07.473019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.473023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.283 [2024-07-15 22:00:07.473030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.473034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.283 [2024-07-15 22:00:07.473037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.283 [2024-07-15 22:00:07.473043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.283 [2024-07-15 22:00:07.473052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.283 [2024-07-15 22:00:07.473132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.283 [2024-07-15 22:00:07.473138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.284 [2024-07-15 22:00:07.473141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.284 [2024-07-15 22:00:07.473144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.284 [2024-07-15 22:00:07.473153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.284 [2024-07-15 22:00:07.473157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.284 [2024-07-15 22:00:07.473160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.284 [2024-07-15 22:00:07.473166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.284 [2024-07-15 22:00:07.473175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.284 [2024-07-15 22:00:07.477237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.284 [2024-07-15 22:00:07.477246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.284 [2024-07-15 
22:00:07.477249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.284 [2024-07-15 22:00:07.477253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.284 [2024-07-15 22:00:07.477261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.284 [2024-07-15 22:00:07.477265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.284 [2024-07-15 22:00:07.477268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a1ec0) 00:22:13.284 [2024-07-15 22:00:07.477274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.284 [2024-07-15 22:00:07.477286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x925440, cid 3, qid 0 00:22:13.284 [2024-07-15 22:00:07.477436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.284 [2024-07-15 22:00:07.477442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.284 [2024-07-15 22:00:07.477445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.284 [2024-07-15 22:00:07.477448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x925440) on tqpair=0x8a1ec0 00:22:13.284 [2024-07-15 22:00:07.477455] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:22:13.284 00:22:13.284 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:13.284 [2024-07-15 22:00:07.515346] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
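The -r argument handed to spdk_nvme_identify above is a transport ID in SPDK's key:value string form, and the EAL parameters on the next line are the DPDK environment it brings up before connecting. As a rough sketch of the same flow in a host application, assuming the standard public entry points (spdk_nvme_transport_id_parse() and spdk_nvme_connect()); names and option values here are illustrative, not the tool's actual code:

/* Sketch: connect to the same target the identify run above uses.
 * Error handling trimmed to the essentials. */
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same TRID string as the '-r' argument in the log. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	/* Illustrative: SPDK pings at half this, the 5 s cadence logged earlier. */
	opts.keep_alive_timeout_ms = 10000;

	ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
	if (ctrlr == NULL) {
		return 1;
	}
	/* ... issue identify / log page commands here ... */
	spdk_nvme_detach(ctrlr);	/* drives the CC-based shutdown traced above */
	return 0;
}

spdk_nvme_connect() runs the whole admin-queue bring-up that the DEBUG records below trace: TCP socket connect, ICReq/ICResp exchange, FABRIC CONNECT, then the register state machine.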
00:22:13.284 [2024-07-15 22:00:07.515394] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774419 ] 00:22:13.545 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.545 [2024-07-15 22:00:07.545427] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:13.545 [2024-07-15 22:00:07.545463] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:13.545 [2024-07-15 22:00:07.545468] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:13.545 [2024-07-15 22:00:07.545478] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:13.545 [2024-07-15 22:00:07.545484] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:13.545 [2024-07-15 22:00:07.545834] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:13.545 [2024-07-15 22:00:07.545856] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x130dec0 0 00:22:13.545 [2024-07-15 22:00:07.560233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:13.545 [2024-07-15 22:00:07.560248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:13.545 [2024-07-15 22:00:07.560254] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:13.545 [2024-07-15 22:00:07.560258] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:13.545 [2024-07-15 22:00:07.560286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.545 [2024-07-15 22:00:07.560291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.545 [2024-07-15 22:00:07.560294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.545 [2024-07-15 22:00:07.560305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:13.545 [2024-07-15 22:00:07.560321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.545 [2024-07-15 22:00:07.568237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.545 [2024-07-15 22:00:07.568245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.545 [2024-07-15 22:00:07.568248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.545 [2024-07-15 22:00:07.568252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on tqpair=0x130dec0 00:22:13.545 [2024-07-15 22:00:07.568262] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:13.545 [2024-07-15 22:00:07.568268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:13.545 [2024-07-15 22:00:07.568272] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:13.545 [2024-07-15 22:00:07.568286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.545 [2024-07-15 22:00:07.568289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
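Each FABRIC PROPERTY GET/SET NOTICE in this trace is a register access tunneled through the admin queue: a fabrics controller has no PCIe BAR, so the "read vs", "read cap" and "check en" states each turn into a Property Get or Set capsule on the wire. Once the controller is attached, the values those states fetched are cached in the ctrlr and can be read back; a small sketch under that assumption (the helper name is made up):

/* Hypothetical helper: print the registers the "read vs"/"read cap"
 * init states above fetched via Fabrics Property Get. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	printf("NVMe version: %u.%u\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
	printf("Max queue entries: %u\n",
	       (unsigned)cap.bits.mqes + 1);		/* MQES is zero-based */
	printf("Ready timeout: %u ms\n",
	       (unsigned)cap.bits.to * 500);		/* CAP.TO is in 500 ms units */
}

MQES+1 and CAP.TO*500 are where the "Maximum Queue Entries: 128" and "Reset Timeout: 15000 ms" lines in the identify report above come from.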
00:22:13.545 [2024-07-15 22:00:07.568293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.545 [2024-07-15 22:00:07.568299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.545 [2024-07-15 22:00:07.568313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.545 [2024-07-15 22:00:07.568541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.545 [2024-07-15 22:00:07.568546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.545 [2024-07-15 22:00:07.568549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.545 [2024-07-15 22:00:07.568552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on tqpair=0x130dec0 00:22:13.545 [2024-07-15 22:00:07.568558] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:13.545 [2024-07-15 22:00:07.568565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:13.545 [2024-07-15 22:00:07.568572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.545 [2024-07-15 22:00:07.568575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.545 [2024-07-15 22:00:07.568578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.545 [2024-07-15 22:00:07.568584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.545 [2024-07-15 22:00:07.568595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.545 [2024-07-15 22:00:07.568694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.546 [2024-07-15 22:00:07.568700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.546 [2024-07-15 22:00:07.568703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.568707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on tqpair=0x130dec0 00:22:13.546 [2024-07-15 22:00:07.568711] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:13.546 [2024-07-15 22:00:07.568722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:13.546 [2024-07-15 22:00:07.568728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.568731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.568735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.568740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.546 [2024-07-15 22:00:07.568751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.546 [2024-07-15 22:00:07.568835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.546 [2024-07-15 22:00:07.568841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:22:13.546 [2024-07-15 22:00:07.568844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.568847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on tqpair=0x130dec0 00:22:13.546 [2024-07-15 22:00:07.568852] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:13.546 [2024-07-15 22:00:07.568860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.568863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.568867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.568872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.546 [2024-07-15 22:00:07.568882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.546 [2024-07-15 22:00:07.568984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.546 [2024-07-15 22:00:07.568989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.546 [2024-07-15 22:00:07.568992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.568995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on tqpair=0x130dec0 00:22:13.546 [2024-07-15 22:00:07.568999] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:13.546 [2024-07-15 22:00:07.569004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:13.546 [2024-07-15 22:00:07.569010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:13.546 [2024-07-15 22:00:07.569115] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:13.546 [2024-07-15 22:00:07.569118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:13.546 [2024-07-15 22:00:07.569124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.569137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.546 [2024-07-15 22:00:07.569147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.546 [2024-07-15 22:00:07.569222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.546 [2024-07-15 22:00:07.569235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.546 [2024-07-15 22:00:07.569238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on 
tqpair=0x130dec0 00:22:13.546 [2024-07-15 22:00:07.569247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:13.546 [2024-07-15 22:00:07.569256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.569268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.546 [2024-07-15 22:00:07.569278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.546 [2024-07-15 22:00:07.569377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.546 [2024-07-15 22:00:07.569382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.546 [2024-07-15 22:00:07.569385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on tqpair=0x130dec0 00:22:13.546 [2024-07-15 22:00:07.569392] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:13.546 [2024-07-15 22:00:07.569396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:13.546 [2024-07-15 22:00:07.569403] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:13.546 [2024-07-15 22:00:07.569411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:13.546 [2024-07-15 22:00:07.569420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.569429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.546 [2024-07-15 22:00:07.569439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.546 [2024-07-15 22:00:07.569563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.546 [2024-07-15 22:00:07.569569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.546 [2024-07-15 22:00:07.569572] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569575] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130dec0): datao=0, datal=4096, cccid=0 00:22:13.546 [2024-07-15 22:00:07.569579] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1390fc0) on tqpair(0x130dec0): expected_datao=0, payload_size=4096 00:22:13.546 [2024-07-15 22:00:07.569583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569589] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569593] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.546 [2024-07-15 22:00:07.569634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.546 [2024-07-15 22:00:07.569637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on tqpair=0x130dec0 00:22:13.546 [2024-07-15 22:00:07.569646] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:13.546 [2024-07-15 22:00:07.569650] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:13.546 [2024-07-15 22:00:07.569654] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:13.546 [2024-07-15 22:00:07.569660] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:13.546 [2024-07-15 22:00:07.569664] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:13.546 [2024-07-15 22:00:07.569668] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:13.546 [2024-07-15 22:00:07.569675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:13.546 [2024-07-15 22:00:07.569681] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.569694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:13.546 [2024-07-15 22:00:07.569704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.546 [2024-07-15 22:00:07.569830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.546 [2024-07-15 22:00:07.569835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.546 [2024-07-15 22:00:07.569838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on tqpair=0x130dec0 00:22:13.546 [2024-07-15 22:00:07.569847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.569858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.546 [2024-07-15 22:00:07.569864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569870] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.569875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.546 [2024-07-15 22:00:07.569880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.569891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.546 [2024-07-15 22:00:07.569896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.569907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.546 [2024-07-15 22:00:07.569911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:13.546 [2024-07-15 22:00:07.569921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:13.546 [2024-07-15 22:00:07.569926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.546 [2024-07-15 22:00:07.569930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130dec0) 00:22:13.546 [2024-07-15 22:00:07.569937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.547 [2024-07-15 22:00:07.569948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1390fc0, cid 0, qid 0 00:22:13.547 [2024-07-15 22:00:07.569953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391140, cid 1, qid 0 00:22:13.547 [2024-07-15 22:00:07.569957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13912c0, cid 2, qid 0 00:22:13.547 [2024-07-15 22:00:07.569961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391440, cid 3, qid 0 00:22:13.547 [2024-07-15 22:00:07.569964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13915c0, cid 4, qid 0 00:22:13.547 [2024-07-15 22:00:07.570095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.547 [2024-07-15 22:00:07.570101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.547 [2024-07-15 22:00:07.570104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13915c0) on tqpair=0x130dec0 00:22:13.547 [2024-07-15 22:00:07.570111] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:13.547 [2024-07-15 22:00:07.570116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130dec0) 00:22:13.547 [2024-07-15 22:00:07.570146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:13.547 [2024-07-15 22:00:07.570156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13915c0, cid 4, qid 0 00:22:13.547 [2024-07-15 22:00:07.570237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.547 [2024-07-15 22:00:07.570243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.547 [2024-07-15 22:00:07.570246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13915c0) on tqpair=0x130dec0 00:22:13.547 [2024-07-15 22:00:07.570305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130dec0) 00:22:13.547 [2024-07-15 22:00:07.570330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.547 [2024-07-15 22:00:07.570339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13915c0, cid 4, qid 0 00:22:13.547 [2024-07-15 22:00:07.570452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.547 [2024-07-15 22:00:07.570458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.547 [2024-07-15 22:00:07.570461] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570464] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130dec0): datao=0, datal=4096, cccid=4 00:22:13.547 [2024-07-15 22:00:07.570470] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13915c0) on tqpair(0x130dec0): expected_datao=0, payload_size=4096 00:22:13.547 [2024-07-15 22:00:07.570474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570480] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570483] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:22:13.547 [2024-07-15 22:00:07.570512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.547 [2024-07-15 22:00:07.570515] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13915c0) on tqpair=0x130dec0 00:22:13.547 [2024-07-15 22:00:07.570526] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:13.547 [2024-07-15 22:00:07.570538] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130dec0) 00:22:13.547 [2024-07-15 22:00:07.570563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.547 [2024-07-15 22:00:07.570573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13915c0, cid 4, qid 0 00:22:13.547 [2024-07-15 22:00:07.570704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.547 [2024-07-15 22:00:07.570710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.547 [2024-07-15 22:00:07.570713] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570716] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130dec0): datao=0, datal=4096, cccid=4 00:22:13.547 [2024-07-15 22:00:07.570720] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13915c0) on tqpair(0x130dec0): expected_datao=0, payload_size=4096 00:22:13.547 [2024-07-15 22:00:07.570723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570729] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570732] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.547 [2024-07-15 22:00:07.570762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.547 [2024-07-15 22:00:07.570764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13915c0) on tqpair=0x130dec0 00:22:13.547 [2024-07-15 22:00:07.570779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130dec0) 00:22:13.547 [2024-07-15 22:00:07.570803] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.547 [2024-07-15 22:00:07.570812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13915c0, cid 4, qid 0 00:22:13.547 [2024-07-15 22:00:07.570908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.547 [2024-07-15 22:00:07.570914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.547 [2024-07-15 22:00:07.570917] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570920] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130dec0): datao=0, datal=4096, cccid=4 00:22:13.547 [2024-07-15 22:00:07.570924] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13915c0) on tqpair(0x130dec0): expected_datao=0, payload_size=4096 00:22:13.547 [2024-07-15 22:00:07.570928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570933] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570936] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.547 [2024-07-15 22:00:07.570965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.547 [2024-07-15 22:00:07.570968] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.570972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13915c0) on tqpair=0x130dec0 00:22:13.547 [2024-07-15 22:00:07.570978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.570998] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.571003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.571007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.571012] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:13.547 [2024-07-15 22:00:07.571016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:13.547 [2024-07-15 22:00:07.571020] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:13.547 [2024-07-15 22:00:07.571032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.571035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x130dec0) 00:22:13.547 [2024-07-15 22:00:07.571041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.547 [2024-07-15 22:00:07.571047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.571050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.571053] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130dec0) 00:22:13.547 [2024-07-15 22:00:07.571058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.547 [2024-07-15 22:00:07.571070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13915c0, cid 4, qid 0 00:22:13.547 [2024-07-15 22:00:07.571075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391740, cid 5, qid 0 00:22:13.547 [2024-07-15 22:00:07.571251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.547 [2024-07-15 22:00:07.571257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.547 [2024-07-15 22:00:07.571259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.571265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13915c0) on tqpair=0x130dec0 00:22:13.547 [2024-07-15 22:00:07.571271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.547 [2024-07-15 22:00:07.571275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.547 [2024-07-15 22:00:07.571278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.571281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391740) on tqpair=0x130dec0 00:22:13.547 [2024-07-15 22:00:07.571289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.547 [2024-07-15 22:00:07.571292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130dec0) 00:22:13.547 [2024-07-15 22:00:07.571298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.547 [2024-07-15 22:00:07.571308] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391740, cid 5, qid 0 00:22:13.548 [2024-07-15 22:00:07.571407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.548 [2024-07-15 22:00:07.571412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.548 [2024-07-15 22:00:07.571415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391740) on tqpair=0x130dec0 00:22:13.548 [2024-07-15 22:00:07.571426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130dec0) 00:22:13.548 [2024-07-15 22:00:07.571435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.548 [2024-07-15 22:00:07.571445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391740, cid 5, qid 0 00:22:13.548 [2024-07-15 22:00:07.571556] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.548 [2024-07-15 22:00:07.571562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.548 [2024-07-15 22:00:07.571565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391740) on tqpair=0x130dec0 00:22:13.548 [2024-07-15 22:00:07.571575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130dec0) 00:22:13.548 [2024-07-15 22:00:07.571584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.548 [2024-07-15 22:00:07.571593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391740, cid 5, qid 0 00:22:13.548 [2024-07-15 22:00:07.571667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.548 [2024-07-15 22:00:07.571672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.548 [2024-07-15 22:00:07.571675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391740) on tqpair=0x130dec0 00:22:13.548 [2024-07-15 22:00:07.571691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130dec0) 00:22:13.548 [2024-07-15 22:00:07.571700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.548 [2024-07-15 22:00:07.571706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130dec0) 00:22:13.548 [2024-07-15 22:00:07.571715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.548 [2024-07-15 22:00:07.571722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x130dec0) 00:22:13.548 [2024-07-15 22:00:07.571731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.548 [2024-07-15 22:00:07.571737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x130dec0) 00:22:13.548 [2024-07-15 22:00:07.571745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.548 [2024-07-15 22:00:07.571756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391740, cid 5, qid 0 00:22:13.548 [2024-07-15 22:00:07.571760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13915c0, cid 4, qid 0 
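In cdw10 of the four GET LOG PAGE commands above, the low byte is the log page ID: 01h (Error Information), 02h (SMART / Health Information), 03h (Firmware Slot), and 05h (Commands Supported and Effects), which is where the corresponding sections of the report below come from. A sketch of fetching the health page through the public API, assuming an attached ctrlr; read_health_log, log_page_done, and g_log_done are illustrative names:

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static bool g_log_done;

static void log_page_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	struct spdk_nvme_health_information_page *hp = ctx;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* Feeds the "Available Spare" field of the report below. */
		printf("available spare: %u%%\n", (unsigned)hp->available_spare);
	}
	g_log_done = true;
}

static int read_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
	/* DMA-safe buffer; for the TCP transport it is simply copied into the PDU. */
	struct spdk_nvme_health_information_page *hp =
		spdk_zmalloc(sizeof(*hp), 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
			     SPDK_MALLOC_DMA);
	int rc;

	if (hp == NULL) {
		return -1;
	}
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
					      SPDK_NVME_GLOBAL_NS_TAG, /* nsid:ffffffff, as traced */
					      hp, sizeof(*hp), 0, log_page_done, hp);
	while (rc == 0 && !g_log_done) {
		/* Completions surface through the same admin qpair polled above. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	spdk_free(hp);
	return rc;
}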
00:22:13.548 [2024-07-15 22:00:07.571764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13918c0, cid 6, qid 0 00:22:13.548 [2024-07-15 22:00:07.571768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391a40, cid 7, qid 0 00:22:13.548 [2024-07-15 22:00:07.571921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.548 [2024-07-15 22:00:07.571927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.548 [2024-07-15 22:00:07.571930] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.571933] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130dec0): datao=0, datal=8192, cccid=5 00:22:13.548 [2024-07-15 22:00:07.571936] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1391740) on tqpair(0x130dec0): expected_datao=0, payload_size=8192 00:22:13.548 [2024-07-15 22:00:07.571940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572055] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572059] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.548 [2024-07-15 22:00:07.572069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.548 [2024-07-15 22:00:07.572072] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572074] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130dec0): datao=0, datal=512, cccid=4 00:22:13.548 [2024-07-15 22:00:07.572078] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13915c0) on tqpair(0x130dec0): expected_datao=0, payload_size=512 00:22:13.548 [2024-07-15 22:00:07.572082] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572087] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572090] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.548 [2024-07-15 22:00:07.572100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.548 [2024-07-15 22:00:07.572102] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572105] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130dec0): datao=0, datal=512, cccid=6 00:22:13.548 [2024-07-15 22:00:07.572109] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13918c0) on tqpair(0x130dec0): expected_datao=0, payload_size=512 00:22:13.548 [2024-07-15 22:00:07.572113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572118] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572121] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:13.548 [2024-07-15 22:00:07.572132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:13.548 [2024-07-15 22:00:07.572135] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572138] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130dec0): datao=0, datal=4096, cccid=7 00:22:13.548 [2024-07-15 22:00:07.572142] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1391a40) on tqpair(0x130dec0): expected_datao=0, payload_size=4096 00:22:13.548 [2024-07-15 22:00:07.572145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572151] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572154] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.548 [2024-07-15 22:00:07.572166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.548 [2024-07-15 22:00:07.572168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391740) on tqpair=0x130dec0 00:22:13.548 [2024-07-15 22:00:07.572182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.548 [2024-07-15 22:00:07.572187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.548 [2024-07-15 22:00:07.572190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13915c0) on tqpair=0x130dec0 00:22:13.548 [2024-07-15 22:00:07.572202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.548 [2024-07-15 22:00:07.572207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.548 [2024-07-15 22:00:07.572210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.572213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13918c0) on tqpair=0x130dec0 00:22:13.548 [2024-07-15 22:00:07.572219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.548 [2024-07-15 22:00:07.576230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.548 [2024-07-15 22:00:07.576235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.548 [2024-07-15 22:00:07.576238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391a40) on tqpair=0x130dec0
=====================================================
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
=====================================================
Controller Capabilities/Features
================================
Vendor ID: 8086
Subsystem Vendor ID: 8086
Serial Number: SPDK00000000000001
Model Number: SPDK bdev Controller
Firmware Version: 24.09
Recommended Arb Burst: 6
IEEE OUI Identifier: e4 d2 5c
Multi-path I/O
May have multiple subsystem ports: Yes
May have multiple controllers: Yes
Associated with SR-IOV VF: No
Max Data Transfer Size: 131072
Max Number of Namespaces: 32
Max Number of I/O Queues: 127
NVMe Specification Version (VS): 1.3
NVMe Specification Version (Identify): 1.3
Maximum Queue Entries: 128
Contiguous Queues Required: Yes
Arbitration Mechanisms Supported
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 15000 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 4096 bytes
Persistent Memory Region: Not Supported
Optional Asynchronous Events Supported
Namespace Attribute Notices: Supported
Firmware Activation Notices: Not Supported
ANA Change Notices: Not Supported
PLE Aggregate Log Change Notices: Not Supported
LBA Status Info Alert Notices: Not Supported
EGE Aggregate Log Change Notices: Not Supported
Normal NVM Subsystem Shutdown event: Not Supported
Zone Descriptor Change Notices: Not Supported
Discovery Log Change Notices: Not Supported
Controller Attributes
128-bit Host Identifier: Supported
Non-Operational Permissive Mode: Not Supported
NVM Sets: Not Supported
Read Recovery Levels: Not Supported
Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported
Traffic Based Keep Alive: Not Supported
Namespace Granularity: Not Supported
SQ Associations: Not Supported
UUID List: Not Supported
Multi-Domain Subsystem: Not Supported
Fixed Capacity Management: Not Supported
Variable Capacity Management: Not Supported
Delete Endurance Group: Not Supported
Delete NVM Set: Not Supported
Extended LBA Formats Supported: Not Supported
Flexible Data Placement Supported: Not Supported

Controller Memory Buffer Support
================================
Supported: No

Persistent Memory Region Support
================================
Supported: No

Admin Command Set Attributes
============================
Security Send/Receive: Not Supported
Format NVM: Not Supported
Firmware Activate/Download: Not Supported
Namespace Management: Not Supported
Device Self-Test: Not Supported
Directives: Not Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Not Supported
Get LBA Status Capability: Not Supported
Command & Feature Lockdown Capability: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: N/A
Firmware Slot 1 Read-Only: N/A
Firmware Activation Without Reset: N/A
Multiple Update Detection Support: N/A
Firmware Update Granularity: No Information Provided
Per-Namespace SMART Log: No
Asymmetric Namespace Access Log Page: Not Supported
Subsystem NQN: nqn.2016-06.io.spdk:cnode1
Command Effects Log Page: Supported
Get Log Page Extended Data: Supported
Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported
Supported Log Pages Log Page: May Support
Commands Supported & Effects Log Page: Not Supported
Feature Identifiers & Effects Log Page: May Support
NVMe-MI Commands & Effects Log Page: May Support
Data Area 4 for Telemetry Log: Not Supported
Error Log Page Entries Supported: 128
Keep Alive: Supported
Keep Alive Granularity: 10000 ms

NVM Command Set Attributes
==========================
Submission Queue Entry Size
Max: 64
Min: 64
Completion Queue Entry Size
Max: 16
Min: 16
Number of Namespaces: 32
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Not Supported
Reservations: Supported
Timestamp: Not Supported
Copy: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 1
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Supported
Scatter-Gather List
SGL Command Set: Supported
SGL Keyed: Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported

Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 24.09


Commands Supported and Effects
==============================
Admin Commands
--------------
Get Log Page (02h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Keep Alive (18h): Supported
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Copy (19h): Supported LBA-Change

Error Log
=========

Arbitration
===========
Arbitration Burst: 1

Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0:
Max Power: 0.00 W
Non-Operational State: Operational
Entry Latency: Not Reported
Exit Latency: Not Reported
Relative Read Throughput: 0
Relative Read Latency: 0
Relative Write Throughput: 0
Relative Write Latency: 0
Idle Power: Not Reported
Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported

Health Information
==================
Critical Warnings:
Available Spare Space: OK
Temperature: OK
Device Reliability: OK
Read Only: No
Volatile Memory Backup: OK
Current Temperature: 0 Kelvin (-273 Celsius)
Temperature Threshold: 0 Kelvin (-273 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used:[2024-07-15 22:00:07.576321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.549 [2024-07-15 22:00:07.576326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x130dec0) 00:22:13.549 [2024-07-15 22:00:07.576332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.549 [2024-07-15 22:00:07.576345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391a40, cid 7, qid 0 00:22:13.549 [2024-07-15 22:00:07.576528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.549 [2024-07-15 22:00:07.576534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:13.549 [2024-07-15 22:00:07.576537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:13.549 [2024-07-15 22:00:07.576540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391a40) on tqpair=0x130dec0 00:22:13.549 [2024-07-15 22:00:07.576568] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:13.549 [2024-07-15 22:00:07.576577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1390fc0) on tqpair=0x130dec0 00:22:13.549 [2024-07-15 22:00:07.576582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.549 [2024-07-15 22:00:07.576587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391140) on tqpair=0x130dec0 00:22:13.549 [2024-07-15 22:00:07.576591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.549 [2024-07-15 22:00:07.576595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13912c0) on tqpair=0x130dec0 00:22:13.549 [2024-07-15 22:00:07.576601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.549 [2024-07-15 22:00:07.576605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391440) on tqpair=0x130dec0 00:22:13.550 [2024-07-15 22:00:07.576609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.550 [2024-07-15 22:00:07.576616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:13.550 [2024-07-15 22:00:07.576619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:13.550 [2024-07-15 22:00:07.576622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130dec0) 00:22:13.550 [2024-07-15 22:00:07.576628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.550 [2024-07-15 22:00:07.576641] nvme_tcp.c:
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391440, cid 3, qid 0
00:22:13.550 [2024-07-15 22:00:07.576736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:13.550 [2024-07-15 22:00:07.576742] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:13.550 [2024-07-15 22:00:07.576745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:13.550 [2024-07-15 22:00:07.576749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391440) on tqpair=0x130dec0
00:22:13.550 [2024-07-15 22:00:07.576754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:13.550 [2024-07-15 22:00:07.576757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:13.550 [2024-07-15 22:00:07.576761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130dec0)
00:22:13.550 [2024-07-15 22:00:07.576767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:13.550 [2024-07-15 22:00:07.576780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1391440, cid 3, qid 0
00:22:13.550 [2024-07-15 22:00:07.576893] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:22:13.550 [2024-07-15 22:00:07.576897] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:22:13.550 [2024-07-15 22:00:07.576917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical pdu_ch_handle -> pdu_psh_handle -> capsule_resp_hdr_handle -> req_complete -> capsule_cmd_send cycles elided: the same FABRIC PROPERTY GET on cid 3 repeats, with fresh timestamps, roughly twenty more times while the host polls the shutdown status ...]
00:22:13.552 [2024-07-15 22:00:07.583483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:13.552
[2024-07-15 22:00:07.583488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:13.552 [2024-07-15 22:00:07.583491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:13.552 [2024-07-15 22:00:07.583494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1391440) on tqpair=0x130dec0
00:22:13.552 [2024-07-15 22:00:07.583501] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:22:13.552 0%
00:22:13.552 Data Units Read: 0
00:22:13.552 Data Units Written: 0
00:22:13.552 Host Read Commands: 0
00:22:13.552 Host Write Commands: 0
00:22:13.552 Controller Busy Time: 0 minutes
00:22:13.552 Power Cycles: 0
00:22:13.552 Power On Hours: 0 hours
00:22:13.552 Unsafe Shutdowns: 0
00:22:13.552 Unrecoverable Media Errors: 0
00:22:13.552 Lifetime Error Log Entries: 0
00:22:13.552 Warning Temperature Time: 0 minutes
00:22:13.552 Critical Temperature Time: 0 minutes
00:22:13.552
00:22:13.552 Number of Queues
00:22:13.552 ================
00:22:13.552 Number of I/O Submission Queues: 127
00:22:13.552 Number of I/O Completion Queues: 127
00:22:13.552
00:22:13.552 Active Namespaces
00:22:13.552 =================
00:22:13.552 Namespace ID:1
00:22:13.552 Error Recovery Timeout: Unlimited
00:22:13.552 Command Set Identifier: NVM (00h)
00:22:13.552 Deallocate: Supported
00:22:13.552 Deallocated/Unwritten Error: Not Supported
00:22:13.552 Deallocated Read Value: Unknown
00:22:13.552 Deallocate in Write Zeroes: Not Supported
00:22:13.552 Deallocated Guard Field: 0xFFFF
00:22:13.552 Flush: Supported
00:22:13.552 Reservation: Supported
00:22:13.552 Namespace Sharing Capabilities: Multiple Controllers
00:22:13.552 Size (in LBAs): 131072 (0GiB)
00:22:13.552 Capacity (in LBAs): 131072 (0GiB)
00:22:13.552 Utilization (in LBAs): 131072 (0GiB)
00:22:13.552 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:13.552 EUI64: ABCDEF0123456789
00:22:13.552 UUID: 2bc4eec2-4f0f-4684-9320-98788c22826f
00:22:13.552 Thin Provisioning: Not Supported
00:22:13.552 Per-NS Atomic Units: Yes
00:22:13.552 Atomic Boundary Size (Normal): 0
00:22:13.552 Atomic Boundary Size (PFail): 0
00:22:13.552 Atomic Boundary Offset: 0
00:22:13.552 Maximum Single Source Range Length: 65535
00:22:13.552 Maximum Copy Length: 65535
00:22:13.552 Maximum Source Range Count: 1
00:22:13.552 NGUID/EUI64 Never Reused: No
00:22:13.552 Namespace Write Protected: No
00:22:13.552 Number of LBA Formats: 1
00:22:13.552 Current LBA Format: LBA Format #00
00:22:13.552 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:13.552
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:13.552
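Condensed, the nvmf_identify teardown tracing through here comes down to a handful of commands. A minimal sketch, assuming the usual SPDK tree layout (rpc.py = scripts/rpc.py); the body of the {1..20} retry loop around the module unloads is an assumption, since the xtrace only shows the individual commands:

  sync
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  for i in {1..20}; do
      # assumed loop body; the trace below shows the unloads succeeding on the first pass
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  kill 3774182 && wait 3774182   # killprocess: the nvmf_tgt (reactor_0) pid
  ip -4 addr flush cvl_0_1       # nvmf_tcp_fini, once the target netns is removed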
22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:13.552 rmmod nvme_tcp 00:22:13.552 rmmod nvme_fabrics 00:22:13.552 rmmod nvme_keyring 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3774182 ']' 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3774182 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3774182 ']' 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3774182 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3774182 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3774182' 00:22:13.552 killing process with pid 3774182 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3774182 00:22:13.552 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3774182 00:22:13.810 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:13.810 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:13.810 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:13.810 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.810 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:13.810 22:00:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.810 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.810 22:00:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.742 22:00:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.742 00:22:15.742 real 0m8.867s 00:22:15.742 user 0m7.241s 00:22:15.742 sys 0m4.149s 00:22:15.742 22:00:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.742 22:00:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.742 ************************************ 00:22:15.742 END TEST nvmf_identify 00:22:15.742 ************************************ 00:22:16.001 22:00:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:16.001 22:00:10 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:16.001 22:00:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.001 22:00:10 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.001 22:00:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.001 ************************************ 00:22:16.001 START TEST nvmf_perf 00:22:16.001 ************************************ 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:16.001 * Looking for test storage... 00:22:16.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.001 22:00:10 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.002 
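For scale, the defaults picked up here fix the test bdev geometry: MALLOC_BDEV_SIZE=64 (MiB) with MALLOC_BLOCK_SIZE=512 gives 64 x 1024 x 1024 / 512 = 131072 blocks, which lines up with the 131072-LBA namespace reported in the identify output above.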
22:00:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.002 22:00:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:21.287 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:21.287 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:21.287 Found net devices under 0000:86:00.0: cvl_0_0 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:21.287 Found net devices under 0000:86:00.1: cvl_0_1 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.287 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:22:21.546 00:22:21.546 --- 10.0.0.2 ping statistics --- 00:22:21.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.546 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:22:21.546 00:22:21.546 --- 10.0.0.1 ping statistics --- 00:22:21.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.546 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3778121 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3778121 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3778121 ']' 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.546 22:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.546 [2024-07-15 22:00:15.634143] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:22:21.546 [2024-07-15 22:00:15.634187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.546 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.546 [2024-07-15 22:00:15.691976] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.546 [2024-07-15 22:00:15.773032] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.546 [2024-07-15 22:00:15.773067] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
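Stripped of the harness bookkeeping, the nvmftestinit plumbing traced above maps each e810 port to its kernel netdev via sysfs, then splits target and initiator onto separate network namespaces. A condensed sketch of the visible steps, using the device names as discovered above:

  # PCI -> netdev mapping (nvmf/common.sh@383): glob the device's net/ directory
  pci_net_devs=(/sys/bus/pci/devices/0000:86:00.0/net/*)   # -> cvl_0_0
  # move the target port into its own namespace; the initiator port stays in the root ns
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity pings in both directions (the 0.171 ms / 0.253 ms results above)
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1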
00:22:21.546 [2024-07-15 22:00:15.773074] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.547 [2024-07-15 22:00:15.773080] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.547 [2024-07-15 22:00:15.773087] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.547 [2024-07-15 22:00:15.773135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.547 [2024-07-15 22:00:15.773237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.547 [2024-07-15 22:00:15.773293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.547 [2024-07-15 22:00:15.773295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.479 22:00:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.479 22:00:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:22.479 22:00:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.479 22:00:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.479 22:00:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:22.479 22:00:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.479 22:00:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:22.479 22:00:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:25.757 22:00:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:25.757 22:00:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:25.757 22:00:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:25.757 22:00:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:25.757 22:00:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:25.757 22:00:19 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:25.757 22:00:19 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:25.757 22:00:19 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:25.757 22:00:19 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:26.015 [2024-07-15 22:00:20.031967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.015 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:26.273 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:26.273 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:26.273 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:26.273 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:26.532 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:26.790 [2024-07-15 22:00:20.774713] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.790 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:26.790 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:26.790 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:26.790 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:26.790 22:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:28.167 Initializing NVMe Controllers 00:22:28.167 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:28.167 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:28.167 Initialization complete. Launching workers. 00:22:28.167 ======================================================== 00:22:28.167 Latency(us) 00:22:28.167 Device Information : IOPS MiB/s Average min max 00:22:28.167 PCIE (0000:5e:00.0) NSID 1 from core 0: 98469.58 384.65 324.86 24.18 6209.84 00:22:28.167 ======================================================== 00:22:28.167 Total : 98469.58 384.65 324.86 24.18 6209.84 00:22:28.167 00:22:28.167 22:00:22 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.167 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.542 Initializing NVMe Controllers 00:22:29.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:29.542 Initialization complete. Launching workers. 
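Before the numbers start: the host/perf.sh bring-up just traced reduces to the RPC sequence below, followed by the baseline run whose output follows. A sketch condensed straight from the trace (rpc.py = scripts/rpc.py, issued against the target running inside cvl_0_0_ns_spdk):

  # target side: malloc bdev, TCP transport, subsystem, two namespaces, data + discovery listeners
  rpc.py bdev_malloc_create 64 512                       # -> Malloc0
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # local PCIe baseline (host/perf.sh@53), then the first NVMe-oF/TCP run (host/perf.sh@56)
  spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
  spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'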
00:22:29.542 ======================================================== 00:22:29.542 Latency(us) 00:22:29.542 Device Information : IOPS MiB/s Average min max 00:22:29.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 132.00 0.52 7642.08 156.42 45033.11 00:22:29.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16483.73 5986.87 47885.86 00:22:29.542 ======================================================== 00:22:29.542 Total : 193.00 0.75 10436.59 156.42 47885.86 00:22:29.542 00:22:29.542 22:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:29.542 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.916 Initializing NVMe Controllers 00:22:30.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:30.917 Initialization complete. Launching workers. 00:22:30.917 ======================================================== 00:22:30.917 Latency(us) 00:22:30.917 Device Information : IOPS MiB/s Average min max 00:22:30.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10514.99 41.07 3043.89 436.56 6275.75 00:22:30.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3875.25 15.14 8271.11 5345.78 15971.47 00:22:30.917 ======================================================== 00:22:30.917 Total : 14390.25 56.21 4451.57 436.56 15971.47 00:22:30.917 00:22:30.917 22:00:24 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:30.917 22:00:24 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:30.917 22:00:24 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:30.917 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.473 Initializing NVMe Controllers 00:22:33.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.473 Controller IO queue size 128, less than required. 00:22:33.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.473 Controller IO queue size 128, less than required. 00:22:33.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:33.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:33.474 Initialization complete. Launching workers. 
00:22:33.474 ======================================================== 00:22:33.474 Latency(us) 00:22:33.474 Device Information : IOPS MiB/s Average min max 00:22:33.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1119.47 279.87 117869.65 68009.86 152688.20 00:22:33.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 622.99 155.75 216243.82 70666.11 309969.77 00:22:33.474 ======================================================== 00:22:33.474 Total : 1742.46 435.61 153041.59 68009.86 309969.77 00:22:33.474 00:22:33.474 22:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:33.474 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.474 No valid NVMe controllers or AIO or URING devices found 00:22:33.474 Initializing NVMe Controllers 00:22:33.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.474 Controller IO queue size 128, less than required. 00:22:33.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.474 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:33.474 Controller IO queue size 128, less than required. 00:22:33.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.474 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:33.474 WARNING: Some requested NVMe devices were skipped 00:22:33.474 22:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:33.474 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.010 Initializing NVMe Controllers 00:22:36.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.010 Controller IO queue size 128, less than required. 00:22:36.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:36.010 Controller IO queue size 128, less than required. 00:22:36.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:36.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:36.010 Initialization complete. Launching workers. 
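The -o 36964 run above fails by design: 36964 bytes is not a multiple of the 512-byte sector size (72 * 512 = 36864, leaving a 100-byte remainder), so perf drops both namespaces and reports that no valid controllers remain. The guard is a plain modulo check; a shell sketch of the same validation, with illustrative variable names:

    # Reject IO sizes that are not sector-aligned, matching the
    # "IO size ... is not a multiple of nsid N sector size 512" warning above.
    io_size=36964 sector_size=512
    if (( io_size % sector_size != 0 )); then
        echo "WARNING: IO size $io_size is not a multiple of sector size $sector_size; removing ns from test" >&2
    fi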
00:22:36.010 00:22:36.010 ==================== 00:22:36.010 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:36.010 TCP transport: 00:22:36.010 polls: 38309 00:22:36.010 idle_polls: 13270 00:22:36.010 sock_completions: 25039 00:22:36.010 nvme_completions: 4639 00:22:36.010 submitted_requests: 6980 00:22:36.010 queued_requests: 1 00:22:36.010 00:22:36.010 ==================== 00:22:36.010 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:36.010 TCP transport: 00:22:36.010 polls: 41167 00:22:36.010 idle_polls: 15282 00:22:36.010 sock_completions: 25885 00:22:36.010 nvme_completions: 4291 00:22:36.010 submitted_requests: 6430 00:22:36.010 queued_requests: 1 00:22:36.010 ======================================================== 00:22:36.010 Latency(us) 00:22:36.010 Device Information : IOPS MiB/s Average min max 00:22:36.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1159.49 289.87 114071.81 60431.59 188072.72 00:22:36.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1072.49 268.12 120741.25 37512.63 167452.01 00:22:36.010 ======================================================== 00:22:36.010 Total : 2231.98 557.99 117276.54 37512.63 188072.72 00:22:36.010 00:22:36.010 22:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:36.010 22:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.270 rmmod nvme_tcp 00:22:36.270 rmmod nvme_fabrics 00:22:36.270 rmmod nvme_keyring 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3778121 ']' 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3778121 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3778121 ']' 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3778121 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3778121 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:36.270 22:00:30 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3778121' 00:22:36.270 killing process with pid 3778121 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3778121 00:22:36.270 22:00:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3778121 00:22:37.660 22:00:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:37.660 22:00:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:37.660 22:00:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:37.660 22:00:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.660 22:00:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:37.660 22:00:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.660 22:00:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.660 22:00:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.192 22:00:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.193 00:22:40.193 real 0m23.889s 00:22:40.193 user 1m4.650s 00:22:40.193 sys 0m6.951s 00:22:40.193 22:00:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:40.193 22:00:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:40.193 ************************************ 00:22:40.193 END TEST nvmf_perf 00:22:40.193 ************************************ 00:22:40.193 22:00:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:40.193 22:00:33 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:40.193 22:00:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:40.193 22:00:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.193 22:00:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:40.193 ************************************ 00:22:40.193 START TEST nvmf_fio_host 00:22:40.193 ************************************ 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:40.193 * Looking for test storage... 
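Teardown for the perf test above runs in a fixed order: unload the initiator-side kernel modules, kill the target process, remove the SPDK network namespace, and flush the leftover initiator address. The _remove_spdk_ns helper's exact commands are suppressed in this log (its output is redirected away), so the netns step below is an assumed equivalent; a condensed sketch using the names from this job, with $nvmfpid standing for the target PID (3778121 above):

    # nvmftestfini, condensed; interface/namespace names as used in this job.
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # works because the target was launched from this shell
    ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1             # clear the initiator-side address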
00:22:40.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.193 22:00:34 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.537 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:45.538 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:45.538 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:45.538 Found net devices under 0000:86:00.0: cvl_0_0 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:45.538 Found net devices under 0000:86:00.1: cvl_0_1 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
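NIC discovery above is a sysfs walk: the script matches PCI vendor/device pairs loaded into the e810/x722/mlx arrays (both ports here are 8086:159b, handled by the ice driver), then maps each matching function to its kernel net device through the per-device net/ directory. A reduced sketch of that walk, hard-coding the E810 ID matched above:

    # Locate Intel E810 (0x8086:0x159b) functions and their net devices,
    # as the loop above does for 0000:86:00.0 and 0000:86:00.1.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

The two "Found net devices under ..." lines above (cvl_0_0, cvl_0_1) are exactly what this walk yields; one interface is then moved into the cvl_0_0_ns_spdk namespace to act as the target side.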
00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.538 22:00:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:45.538 00:22:45.538 --- 10.0.0.2 ping statistics --- 00:22:45.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.538 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:22:45.538 00:22:45.538 --- 10.0.0.1 ping statistics --- 00:22:45.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.538 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.538 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3784214 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3784214 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3784214 ']' 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.539 22:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.539 [2024-07-15 22:00:39.291734] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:22:45.539 [2024-07-15 22:00:39.291779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.539 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.539 [2024-07-15 22:00:39.348352] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.539 [2024-07-15 22:00:39.428944] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:45.539 [2024-07-15 22:00:39.428979] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.539 [2024-07-15 22:00:39.428985] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.539 [2024-07-15 22:00:39.428991] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.539 [2024-07-15 22:00:39.428996] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.539 [2024-07-15 22:00:39.429038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.539 [2024-07-15 22:00:39.429134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.539 [2024-07-15 22:00:39.429152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.539 [2024-07-15 22:00:39.429153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.105 22:00:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.105 22:00:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:46.105 22:00:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:46.105 [2024-07-15 22:00:40.265544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.105 22:00:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:46.105 22:00:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:46.105 22:00:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.105 22:00:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:46.362 Malloc1 00:22:46.363 22:00:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.621 22:00:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:46.879 22:00:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.879 [2024-07-15 22:00:41.027742] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.879 22:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:47.149 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.151 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.151 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:47.151 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:47.151 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:47.151 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:47.151 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:47.151 22:00:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:47.413 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:47.413 fio-3.35 00:22:47.413 Starting 1 thread 00:22:47.413 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.956 00:22:49.956 test: (groupid=0, jobs=1): err= 0: pid=3784828: Mon Jul 15 22:00:43 2024 00:22:49.956 read: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.4MiB/2005msec) 00:22:49.956 slat (nsec): min=1557, max=249217, avg=1765.85, stdev=2306.45 00:22:49.957 clat (usec): min=3624, max=10273, avg=6080.08, stdev=467.63 00:22:49.957 lat (usec): min=3656, max=10274, avg=6081.84, stdev=467.61 00:22:49.957 clat percentiles (usec): 00:22:49.957 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:22:49.957 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:22:49.957 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:22:49.957 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 9765], 00:22:49.957 | 99.99th=[10159] 00:22:49.957 bw ( KiB/s): 
min=45928, max=47152, per=99.95%, avg=46658.00, stdev=523.84, samples=4 00:22:49.957 iops : min=11482, max=11788, avg=11664.50, stdev=130.96, samples=4 00:22:49.957 write: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.8MiB/2005msec); 0 zone resets 00:22:49.957 slat (nsec): min=1630, max=233714, avg=1863.55, stdev=1733.18 00:22:49.957 clat (usec): min=2484, max=9120, avg=4894.69, stdev=388.43 00:22:49.957 lat (usec): min=2500, max=9122, avg=4896.55, stdev=388.46 00:22:49.957 clat percentiles (usec): 00:22:49.957 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4621], 00:22:49.957 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 4948], 00:22:49.957 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:22:49.957 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 7308], 99.95th=[ 8455], 00:22:49.957 | 99.99th=[ 9110] 00:22:49.957 bw ( KiB/s): min=46008, max=46784, per=100.00%, avg=46352.00, stdev=322.19, samples=4 00:22:49.957 iops : min=11502, max=11696, avg=11588.00, stdev=80.55, samples=4 00:22:49.957 lat (msec) : 4=0.50%, 10=99.48%, 20=0.02% 00:22:49.957 cpu : usr=70.26%, sys=25.65%, ctx=89, majf=0, minf=5 00:22:49.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:49.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:49.957 issued rwts: total=23399,23233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:49.957 00:22:49.957 Run status group 0 (all jobs): 00:22:49.957 READ: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.4MiB (95.8MB), run=2005-2005msec 00:22:49.957 WRITE: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.8MiB (95.2MB), run=2005-2005msec 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:49.957 22:00:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:50.214 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:50.214 fio-3.35 00:22:50.214 Starting 1 thread 00:22:50.214 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.735 00:22:52.735 test: (groupid=0, jobs=1): err= 0: pid=3785357: Mon Jul 15 22:00:46 2024 00:22:52.735 read: IOPS=10.4k, BW=163MiB/s (171MB/s)(327MiB/2005msec) 00:22:52.735 slat (nsec): min=2599, max=87822, avg=2881.24, stdev=1354.52 00:22:52.735 clat (usec): min=2027, max=13758, avg=7381.82, stdev=1810.47 00:22:52.735 lat (usec): min=2030, max=13761, avg=7384.70, stdev=1810.65 00:22:52.735 clat percentiles (usec): 00:22:52.735 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5800], 00:22:52.735 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7373], 60.00th=[ 7898], 00:22:52.735 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10421], 00:22:52.735 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13042], 99.95th=[13042], 00:22:52.735 | 99.99th=[13698] 00:22:52.735 bw ( KiB/s): min=75200, max=94304, per=49.71%, avg=83048.00, stdev=8140.73, samples=4 00:22:52.735 iops : min= 4700, max= 5894, avg=5190.50, stdev=508.80, samples=4 00:22:52.735 write: IOPS=6265, BW=97.9MiB/s (103MB/s)(169MiB/1728msec); 0 zone resets 00:22:52.735 slat (usec): min=30, max=383, avg=32.37, stdev= 7.83 00:22:52.735 clat (usec): min=2114, max=14880, avg=8585.62, stdev=1509.56 00:22:52.735 lat (usec): min=2146, max=14991, avg=8618.00, stdev=1511.56 00:22:52.735 clat percentiles (usec): 00:22:52.735 | 1.00th=[ 5538], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7373], 00:22:52.735 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:22:52.735 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338], 00:22:52.735 | 99.00th=[12649], 99.50th=[13435], 99.90th=[14615], 99.95th=[14746], 00:22:52.735 | 99.99th=[14746] 00:22:52.735 bw ( KiB/s): min=76480, max=98272, per=85.89%, avg=86096.00, stdev=9147.09, samples=4 00:22:52.735 iops : min= 4780, max= 6142, avg=5381.00, stdev=571.69, samples=4 00:22:52.735 lat (msec) : 4=1.32%, 10=88.94%, 20=9.74% 00:22:52.735 cpu : usr=84.49%, sys=13.67%, ctx=104, majf=0, minf=2 
00:22:52.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:52.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:52.735 issued rwts: total=20936,10826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:52.735 00:22:52.735 Run status group 0 (all jobs): 00:22:52.735 READ: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=327MiB (343MB), run=2005-2005msec 00:22:52.735 WRITE: bw=97.9MiB/s (103MB/s), 97.9MiB/s-97.9MiB/s (103MB/s-103MB/s), io=169MiB (177MB), run=1728-1728msec 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.735 rmmod nvme_tcp 00:22:52.735 rmmod nvme_fabrics 00:22:52.735 rmmod nvme_keyring 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3784214 ']' 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3784214 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3784214 ']' 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3784214 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3784214 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3784214' 00:22:52.735 killing process with pid 3784214 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3784214 00:22:52.735 22:00:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3784214 00:22:52.991 22:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.991 22:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:22:52.991 22:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.991 22:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.991 22:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.991 22:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.991 22:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.991 22:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.889 22:00:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.889 00:22:54.889 real 0m15.111s 00:22:54.889 user 0m46.790s 00:22:54.889 sys 0m5.858s 00:22:54.889 22:00:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:54.889 22:00:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 ************************************ 00:22:54.889 END TEST nvmf_fio_host 00:22:54.889 ************************************ 00:22:55.147 22:00:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:55.147 22:00:49 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.147 22:00:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:55.147 22:00:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.147 22:00:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.147 ************************************ 00:22:55.147 START TEST nvmf_failover 00:22:55.147 ************************************ 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.147 * Looking for test storage... 
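Both fio passes in the nvmf_fio_host section above bypass the kernel initiator: the harness LD_PRELOADs the SPDK nvme fio plugin (first grepping ldd output for libasan/libclang_rt.asan so a sanitized build could prepend the ASan runtime — empty here, as the [[ -n '' ]] checks show) and hands the connection parameters to fio through --filename instead of a block-device path, with ioengine=spdk set in the job file. A trimmed sketch of the same invocation, paths as in this workspace:

    # fio through the SPDK nvme plugin, as run above; the transport ID
    # rides in --filename rather than a block-device path.
    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096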
00:22:55.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.147 22:00:49 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.148 22:00:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.399 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:00.400 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.400 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:00.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:23:00.400 00:23:00.400 --- 10.0.0.2 ping statistics --- 00:23:00.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.400 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:23:00.400 00:23:00.400 --- 10.0.0.1 ping statistics --- 00:23:00.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.400 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3789140 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3789140 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3789140 ']' 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.400 22:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.657 [2024-07-15 22:00:54.674498] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
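For reference, the namespace plumbing and reachability check traced above (nvmf_tcp_init in nvmf/common.sh) reduce to roughly the following sequence. This is a reconstruction using the interface names and addresses detected in this run, not the helper itself:

    # move the target-side E810 port into a private namespace; the
    # initiator-side port stays in the default namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic reach the default listener port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before starting nvmf_tgt inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1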
00:23:00.657 [2024-07-15 22:00:54.674539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.657 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.657 [2024-07-15 22:00:54.734211] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:00.657 [2024-07-15 22:00:54.814114] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.657 [2024-07-15 22:00:54.814152] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.657 [2024-07-15 22:00:54.814159] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.657 [2024-07-15 22:00:54.814165] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.657 [2024-07-15 22:00:54.814170] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.657 [2024-07-15 22:00:54.814272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.657 [2024-07-15 22:00:54.814357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.657 [2024-07-15 22:00:54.814359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.586 22:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.586 22:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:01.586 22:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.586 22:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.586 22:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.586 22:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.586 22:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:01.586 [2024-07-15 22:00:55.683872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.586 22:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:01.842 Malloc0 00:23:01.842 22:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.098 22:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.098 22:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.354 [2024-07-15 22:00:56.435756] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.355 22:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:02.611 [2024-07-15 
22:00:56.604203] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:02.611 [2024-07-15 22:00:56.776804] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3789422 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3789422 /var/tmp/bdevperf.sock 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3789422 ']' 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.611 22:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.867 22:00:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.867 22:00:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:02.867 22:00:57 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.430 NVMe0n1 00:23:03.430 22:00:57 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.687 00:23:03.687 22:00:57 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3789637 00:23:03.687 22:00:57 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.687 22:00:57 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:04.619 22:00:58 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.877 [2024-07-15 22:00:58.899307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b03160 is same with the state(5) to be set 00:23:04.877 [2024-07-15 22:00:58.899385] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1b03160 is same with the state(5) to be set
00:23:04.878 [the preceding tcp.c:1621 *ERROR* line repeats verbatim at every timestamp from 22:00:58.899394 through 22:00:58.899741]
00:23:04.878 22:00:58 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:08.231 22:01:01 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:08.231 00
00:23:08.231 22:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:08.231 [2024-07-15 22:01:02.399685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b04090 is same with the state(5) to be set
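While the listeners are pulled one at a time above, the initiator-side view can be polled over the bdevperf app's RPC socket. A small watch loop along these lines (not part of failover.sh, and assuming bdev_nvme_get_controllers is available in this tree) shows the path list shrink and grow as 4420/4421/4422 come and go:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # poll once a second; -s selects the bdevperf RPC socket, as in failover.sh
    while sleep 1; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0
    done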
00:23:08.231 [the preceding tcp.c:1621 *ERROR* line for tqpair=0x1b04090 repeats verbatim at every timestamp from 22:01:02.399733 through 22:01:02.399783]
00:23:08.231 22:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:11.508 22:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:11.508 [2024-07-15 22:01:05.594485] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:11.508 22:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:12.439 22:01:06 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:12.698 [2024-07-15 22:01:06.796057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b05030 is same with the state(5) to be set
00:23:12.698 [the preceding tcp.c:1621 *ERROR* line for tqpair=0x1b05030 repeats verbatim at every subsequent timestamp through 22:01:06.796333]
00:23:12.699 22:01:06 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3789637
00:23:19.253 0
00:23:19.253 22:01:12 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3789422
00:23:19.253 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3789422 ']'
00:23:19.253 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3789422
00:23:19.253 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:19.253 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:19.254 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3789422
00:23:19.254 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:19.254 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:19.254 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3789422'
00:23:19.254 killing process with pid 3789422
00:23:19.254 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3789422
00:23:19.254 22:01:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3789422
00:23:19.254 22:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:19.254 [2024-07-15 22:00:56.834645] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:23:19.254 [2024-07-15 22:00:56.834694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789422 ]
00:23:19.254 EAL: No free 2048 kB hugepages reported on node 1
00:23:19.254 [2024-07-15 22:00:56.890209] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:19.254 [2024-07-15 22:00:56.965162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:19.254 Running I/O for 15 seconds...
00:23:19.254 [2024-07-15 22:00:58.901287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.254 [2024-07-15 22:00:58.901324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.255 [the print_command/print_completion pair above repeats for every outstanding I/O on qid:1, READs and WRITEs covering lba 96320 through at least 96880, each completed as ABORTED - SQ DELETION (00/08) as the qpair to the removed listener is torn down; the capture ends mid-entry]
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 
[2024-07-15 22:00:58.902699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.902990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.902996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.903005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.903013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.903021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.255 [2024-07-15 22:00:58.903027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.903046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.255 [2024-07-15 22:00:58.903053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0 00:23:19.255 [2024-07-15 22:00:58.903060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.903068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.255 [2024-07-15 22:00:58.903074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.255 [2024-07-15 22:00:58.903079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97240 len:8 PRP1 0x0 PRP2 0x0 00:23:19.255 [2024-07-15 22:00:58.903085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.903091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.255 [2024-07-15 22:00:58.903096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.255 [2024-07-15 22:00:58.903102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 PRP1 0x0 PRP2 0x0 00:23:19.255 [2024-07-15 22:00:58.903108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.903115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.255 [2024-07-15 22:00:58.903120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.255 [2024-07-15 22:00:58.903125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97256 len:8 PRP1 0x0 PRP2 0x0 00:23:19.255 [2024-07-15 22:00:58.903131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.255 [2024-07-15 22:00:58.903138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.903143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.903148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97264 len:8 
PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 22:00:58.903154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.903162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.903167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.903172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97272 len:8 PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 22:00:58.903180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.903187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.903193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.903200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 22:00:58.903208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.903215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.903221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.903232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 22:00:58.903239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.903246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.903251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.903256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 22:00:58.903262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.903270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.903276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.903281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 22:00:58.903288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.903295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.903301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.903308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 
22:00:58.903315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.903323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.903329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.903334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 22:00:58.903341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.903348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.903355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.903361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 22:00:58.903367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.903374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.256 [2024-07-15 22:00:58.913124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.256 [2024-07-15 22:00:58.913135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:23:19.256 [2024-07-15 22:00:58.913143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:00:58.913188] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10a58b0 was disconnected and freed. reset controller. 
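Every completion in the flood above carries the same status pair, (00/08): status code type 0x0 (generic) with status code 0x08, ABORTED - SQ DELETION, which is how the driver completes I/O when the submission queue itself is torn down during a qpair disconnect rather than when individual commands fail. Requests that were still queued and never sent on the wire (note their zeroed PRP1 0x0 PRP2 0x0 payload fields) are flushed the same way by nvme_qpair_abort_queued_reqs and "completed manually". A minimal sketch of how a host completion callback can recognize this retryable status, assuming the public spdk/nvme.h API; the io_ctx struct and its requeue hook are hypothetical, not part of SPDK:

#include "spdk/nvme.h"

/* Hypothetical per-I/O context for the sketch; "requeue" stands in for
 * whatever retry mechanism the application provides. */
struct io_ctx {
	void (*requeue)(struct io_ctx *ctx);
};

/* I/O completion callback (spdk_nvme_cmd_cb signature). Treats
 * ABORTED - SQ DELETION as a path-level event worth retrying,
 * not a terminal command error. */
static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* success */
	}

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The (00/08) status printed throughout this log: the qpair
		 * was deleted (failover/reset), so retry on the new path. */
		ctx->requeue(ctx);
		return;
	}

	/* Any other error is a real command failure for this request. */
}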
00:23:19.256 [2024-07-15 22:00:58.913200] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:19.256 [2024-07-15 22:00:58.913221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:19.256 [2024-07-15 22:00:58.913233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.256 [2024-07-15 22:00:58.913241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:19.256 [2024-07-15 22:00:58.913249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.256 [2024-07-15 22:00:58.913258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:19.256 [2024-07-15 22:00:58.913266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.256 [2024-07-15 22:00:58.913275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:19.256 [2024-07-15 22:00:58.913283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.256 [2024-07-15 22:00:58.913290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:19.256 [2024-07-15 22:00:58.913327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107f670 (9): Bad file descriptor
00:23:19.256 [2024-07-15 22:00:58.916343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:19.256 [2024-07-15 22:00:58.991609] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
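The failover itself is bdev_nvme bookkeeping: bdev_nvme_failover_trid swaps the controller's target transport ID from 10.0.0.2:4420 to 10.0.0.2:4421, and a controller reset then reconnects to the new address ("resetting controller" through "Resetting controller successful." above). The trid switch is internal to bdev_nvme, but the reset step uses the public driver API; a hedged sketch of that recover-by-reset step, assuming spdk/nvme.h and omitting the path-selection logic that lives inside bdev_nvme:

#include "spdk/nvme.h"

/* Recover a fabrics controller after its qpair went down (the
 * "Bad file descriptor" flush failure above left the controller
 * in failed state). For NVMe-oF, the reset disconnects and then
 * reconnects to the controller's current transport ID. */
static int
recover_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	if (!spdk_nvme_ctrlr_is_failed(ctrlr)) {
		return 0; /* nothing to recover */
	}

	/* Synchronous variant for brevity; bdev_nvme drives its reset
	 * asynchronously, but the effect is the same: tear the
	 * connection down, then re-enable the controller. */
	return spdk_nvme_ctrlr_reset(ctrlr);
}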
00:23:19.256 [2024-07-15 22:01:02.399982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.256 [2024-07-15 22:01:02.400016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.256 [2024-07-15 22:01:02.400033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.256 [2024-07-15 22:01:02.400047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.256 [2024-07-15 22:01:02.400060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107f670 is same with the state(5) to be set 00:23:19.256 [2024-07-15 22:01:02.400115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.256 [2024-07-15 22:01:02.400538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.256 [2024-07-15 22:01:02.400546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:19.257 [2024-07-15 22:01:02.400672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.400722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400820] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400974] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.400991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.400998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31056 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.401216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.257 [2024-07-15 22:01:02.401235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 
[2024-07-15 22:01:02.401294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.257 [2024-07-15 22:01:02.401512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.257 [2024-07-15 22:01:02.401519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:19.258 [2024-07-15 22:01:02.401749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401899] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.401985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.401992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.402000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.402007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.402015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.402022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.402029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:02.402037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:02.402055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o
00:23:19.258 [2024-07-15 22:01:02.402061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:19.258 [2024-07-15 22:01:02.402067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31528 len:8 PRP1 0x0 PRP2 0x0
00:23:19.258 [2024-07-15 22:01:02.402074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.258 [2024-07-15 22:01:02.402116] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10ae3c0 was disconnected and freed. reset controller.
00:23:19.258 [2024-07-15 22:01:02.402127] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:19.258 [2024-07-15 22:01:02.402134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:19.258 [2024-07-15 22:01:02.404975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:19.258 [2024-07-15 22:01:02.405003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107f670 (9): Bad file descriptor
00:23:19.258 [2024-07-15 22:01:02.438864] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:19.258 [2024-07-15 22:01:06.797387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.258 [2024-07-15 22:01:06.797424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.258 [2024-07-15 22:01:06.797440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.258 [2024-07-15 22:01:06.797447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.258 [2024-07-15 22:01:06.797456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.258 [2024-07-15 22:01:06.797464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.258 [2024-07-15 22:01:06.797472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.258 [2024-07-15 22:01:06.797479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.258 [2024-07-15 22:01:06.797488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.258 [2024-07-15 22:01:06.797494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.258 [2024-07-15 22:01:06.797503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.258 [2024-07-15 22:01:06.797510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.258 [2024-07-15 22:01:06.797519] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.258 [2024-07-15 22:01:06.797762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.258 [2024-07-15 22:01:06.797779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.258 [2024-07-15 22:01:06.797787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36288 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.797976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 
[2024-07-15 22:01:06.797990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.797998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 22:01:06.798310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 22:01:06.798327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 22:01:06.798341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 22:01:06.798360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 22:01:06.798375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 22:01:06.798390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 22:01:06.798404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 
22:01:06.798610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.259 [2024-07-15 22:01:06.798714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 22:01:06.798722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.260 [2024-07-15 22:01:06.798731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 22:01:06.798740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.260 [2024-07-15 22:01:06.798749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 22:01:06.798758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.260 [2024-07-15 22:01:06.798765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 22:01:06.798774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.260 [2024-07-15 22:01:06.798781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [condensed: 22:01:06.798789 through 22:01:06.810105 -- roughly forty near-identical record pairs omitted. A few more in-flight WRITEs (cid:31/56/33/116, lba:36728-36752, SGL DATA BLOCK) and then the queued WRITEs (cid:0, lba:36760 through lba:37040, len:8, PRP1 0x0 PRP2 0x0) were completed manually by nvme_qpair_abort_queued_reqs/nvme_qpair_manual_complete_request, each reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:23:19.261 [2024-07-15 22:01:06.810151] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10bb690 was disconnected and freed. reset controller. 00:23:19.261 [2024-07-15 22:01:06.810162] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:19.261 [condensed: 22:01:06.810188 through 22:01:06.810263 -- four ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) aborted with SQ DELETION (00/08)] 00:23:19.261 [2024-07-15 22:01:06.810272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:19.261 [2024-07-15 22:01:06.810297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107f670 (9): Bad file descriptor 00:23:19.261 [2024-07-15 22:01:06.814178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.261 [2024-07-15 22:01:06.962707] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:19.261
00:23:19.261 Latency(us)
00:23:19.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:19.261 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:19.261 Verification LBA range: start 0x0 length 0x4000
00:23:19.261 NVMe0n1 : 15.01 10894.07 42.55 756.81 0.00 10964.12 641.11 20629.59
00:23:19.261 ===================================================================================================================
00:23:19.261 Total : 10894.07 42.55 756.81 0.00 10964.12 641.11 20629.59
00:23:19.261 Received shutdown signal, test time was about 15.000000 seconds
00:23:19.261
00:23:19.261 Latency(us)
00:23:19.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:19.261 ===================================================================================================================
00:23:19.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
22:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3792162 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3792162 /var/tmp/bdevperf.sock 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3792162 ']' 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
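The pass/fail gate visible just above (failover.sh@65-@67) reduces to a few lines. A condensed sketch, with the log path shortened and the error message added only for illustration:

    count=$(grep -c 'Resetting controller successful' test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2   # illustrative message
        exit 1
    fi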
00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.261 22:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:19.824 22:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.824 22:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:19.824 22:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:20.081 [2024-07-15 22:01:14.141561] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:20.081 22:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:20.338 [2024-07-15 22:01:14.326101] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:20.338 22:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.596 NVMe0n1 00:23:20.596 22:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.160 00:23:21.160 22:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.160 00:23:21.417 22:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:21.417 22:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:21.417 22:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.674 22:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:24.948 22:01:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.948 22:01:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:24.948 22:01:18 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:24.948 22:01:18 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3793088 00:23:24.948 22:01:18 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3793088 00:23:25.876 0 00:23:25.876 22:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:25.876 [2024-07-15 22:01:13.160056] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
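The portal setup and first failover trigger replayed above (failover.sh@76-@87) condense to the following sketch; every command is taken from the trace, with the full /var/jenkins/... script paths shortened to rpc.py and the loop added for brevity:

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do   # attach the same subsystem through all three portals
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # drop the active path
    sleep 3   # give bdev_nvme time to fail over to the next trid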
00:23:25.876 [2024-07-15 22:01:13.160105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792162 ] 00:23:25.876 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.876 [2024-07-15 22:01:13.214493] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.876 [2024-07-15 22:01:13.283635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.876 [2024-07-15 22:01:15.771647] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:25.876 [2024-07-15 22:01:15.771694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.876 [2024-07-15 22:01:15.771705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.876 [2024-07-15 22:01:15.771713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.876 [2024-07-15 22:01:15.771720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.876 [2024-07-15 22:01:15.771728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.876 [2024-07-15 22:01:15.771735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.876 [2024-07-15 22:01:15.771742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.876 [2024-07-15 22:01:15.771748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.876 [2024-07-15 22:01:15.771754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:25.876 [2024-07-15 22:01:15.771781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:25.876 [2024-07-15 22:01:15.771794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbb670 (9): Bad file descriptor 00:23:25.876 [2024-07-15 22:01:15.915409] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:25.876 Running I/O for 1 seconds... 
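The try.txt dump above is bdevperf output; the driving pattern (failover.sh@72-@92 in the trace) is to start bdevperf idle with -z, attach controllers over its RPC socket, then kick the configured job via bdevperf.py. A sketch with paths abbreviated:

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
    # ... bdev_nvme_attach_controller calls as sketched earlier ...
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    wait "$run_test_pid"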
00:23:25.876
00:23:25.876 Latency(us)
00:23:25.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:25.876 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:25.876 Verification LBA range: start 0x0 length 0x4000
00:23:25.876 NVMe0n1 : 1.00 11026.18 43.07 0.00 0.00 11554.98 712.35 14360.93
00:23:25.876 ===================================================================================================================
00:23:25.876 Total : 11026.18 43.07 0.00 0.00 11554.98 712.35 14360.93
00:23:25.876 22:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 22:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:26.133 22:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.390 22:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 22:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:26.648 22:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.648 22:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:29.927 22:01:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 22:01:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3792162 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3792162 ']' 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3792162 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3792162 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3792162' killing process with pid 3792162 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3792162 00:23:29.927 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3792162 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:30.184
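The killprocess helper traced above (common/autotest_common.sh@948-@972) can be read back into roughly this shape; this is a reconstruction from the xtrace, not the exact source, and the sudo branch body is not visible here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                            # @948: a pid is required
        kill -0 "$pid"                                       # @952: process must still be alive
        local process_name
        if [ "$(uname)" = Linux ]; then                      # @953
            process_name=$(ps --no-headers -o comm= "$pid")  # @954: here "reactor_0"
        fi
        [ "$process_name" = sudo ] && return 1               # assumption: never signal sudo itself
        echo "killing process with pid $pid"                 # @966
        kill "$pid"                                          # @967: default SIGTERM
        wait "$pid"                                          # @972: reap and propagate exit status
    }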
22:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.184 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.184 rmmod nvme_tcp 00:23:30.441 rmmod nvme_fabrics 00:23:30.441 rmmod nvme_keyring 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3789140 ']' 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3789140 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3789140 ']' 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3789140 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3789140 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3789140' 00:23:30.441 killing process with pid 3789140 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3789140 00:23:30.441 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3789140 00:23:30.698 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:30.698 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.698 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.698 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.698 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.698 22:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.698 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.698 22:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.599 22:01:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:32.599 00:23:32.599 real 0m37.593s 00:23:32.599 user 2m1.165s 00:23:32.599 sys 0m7.303s 00:23:32.599 22:01:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:32.599 22:01:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
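The nvmftestfini teardown replayed above unloads the kernel NVMe/TCP stack before killing the target. A sketch consistent with the trace (the retry loop and the set +e window appear at nvmf/common.sh@120-@124; the sleep between attempts is my assumption):

    nvmfcleanup() {
        sync
        set +e                                # unload can transiently fail while devices drain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break  # also removes nvme_tcp/nvme_fabrics/nvme_keyring
            sleep 1                           # assumption: brief back-off between attempts
        done
        modprobe -v -r nvme-fabrics
        set -e
    }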
00:23:32.599 ************************************ 00:23:32.599 END TEST nvmf_failover 00:23:32.599 ************************************ 00:23:32.599 22:01:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:32.599 22:01:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:32.599 22:01:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:32.599 22:01:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:32.599 22:01:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:32.856 ************************************ 00:23:32.856 START TEST nvmf_host_discovery 00:23:32.856 ************************************ 00:23:32.856 22:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:32.856 * Looking for test storage... 00:23:32.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.856 22:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.856 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:32.856 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.856 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.856 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.856 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.856 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.856 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2-4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:...:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin (condensed: each of the three assignments re-prepends the same golangci/protoc/go tool directories, so those segments repeat several times in the full value) 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo $PATH (expanded value condensed as above) 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:32.857 22:01:26
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:32.857 22:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.128 22:01:31 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:38.128 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:38.128 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:38.128 22:01:31 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.128 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:38.129 Found net devices under 0000:86:00.0: cvl_0_0 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:38.129 Found net devices under 0000:86:00.1: cvl_0_1 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.129 22:01:31 
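The NIC enumeration above (nvmf/common.sh@382-@401) boils down to a sysfs glob per whitelisted PCI function; the lines below mirror the trace, with the existence guard added as an assumption:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:86:00.0/net/cvl_0_0
        [[ -e ${pci_net_devs[0]} ]] || continue            # assumption: skip ports with no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done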
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.129 22:01:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:23:38.129 00:23:38.129 --- 10.0.0.2 ping statistics --- 00:23:38.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.129 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:23:38.129 00:23:38.129 --- 10.0.0.1 ping statistics --- 00:23:38.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.129 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3797519 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
3797519 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3797519 ']' 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.129 22:01:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.129 [2024-07-15 22:01:32.326996] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:23:38.129 [2024-07-15 22:01:32.327037] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.129 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.388 [2024-07-15 22:01:32.386331] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.388 [2024-07-15 22:01:32.466571] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.388 [2024-07-15 22:01:32.466602] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.388 [2024-07-15 22:01:32.466609] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.388 [2024-07-15 22:01:32.466616] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.388 [2024-07-15 22:01:32.466621] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.388 [2024-07-15 22:01:32.466639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.954 [2024-07-15 22:01:33.160397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.954 [2024-07-15 22:01:33.172560] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.954 null0 00:23:38.954 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.955 22:01:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:38.955 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.955 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.955 null1 00:23:38.955 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.955 22:01:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:38.955 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.955 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3797553 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3797553 /tmp/host.sock 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3797553 ']' 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:39.212 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.212 22:01:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 [2024-07-15 22:01:33.247471] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:23:39.212 [2024-07-15 22:01:33.247514] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797553 ] 00:23:39.212 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.212 [2024-07-15 22:01:33.300930] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.212 [2024-07-15 22:01:33.378920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.144 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.145 [2024-07-15 22:01:34.375725] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.145 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:40.402 22:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:40.966 [2024-07-15 22:01:35.069746] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:40.966 [2024-07-15 22:01:35.069766] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:40.966 [2024-07-15 22:01:35.069779] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.966 [2024-07-15 22:01:35.157045] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:41.223 [2024-07-15 22:01:35.262123] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:41.223 [2024-07-15 22:01:35.262142] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:41.481 22:01:35 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.481 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.739 
22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.739 22:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.997 [2024-07-15 22:01:36.012125] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.997 [2024-07-15 22:01:36.013264] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:41.997 [2024-07-15 22:01:36.013285] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
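The waitforcondition calls that dominate the rest of this trace all follow one idiom: re-evaluate a condition string once per second, at most ten times, and fail the test if it never holds. A simplified reconstruction from the xtrace above (the real helper lives in common/autotest_common.sh; treat this as a sketch, not the verbatim suite code):

waitforcondition() {
    local cond=$1   # condition string, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local max=10    # at most ten one-second retries (@913/@918 above)
    while ((max--)); do
        # eval re-runs the RPC-backed helpers inside the condition on each pass
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

The same loop drives the notification checks: get_notification_count runs notify_get_notifications -i $notify_id, counts the result with jq '. | length', and the condition passes once the count equals expected_count.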
00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.997 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.998 [2024-07-15 22:01:36.101881] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 
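The path checks around host/discovery.sh@63 repeat one RPC-plus-jq pipeline. Reconstructed as a helper (a sketch inferred from the commands in the trace; rpc_cmd is the suite's wrapper around SPDK's rpc.py, and /tmp/host.sock is the host-side RPC socket used in this run):

get_subsystem_paths() {
    # Print every TCP listener port the host has attached for controller $1,
    # numerically sorted and space-joined, e.g. "4420 4421".
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

At this point the target has just started listening on 4421 as well, so the test loops until the helper reports both ports ("4420 4421").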
00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:41.998 22:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:42.257 [2024-07-15 22:01:36.407155] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:42.257 [2024-07-15 22:01:36.407173] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:42.257 [2024-07-15 22:01:36.407178] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.191 22:01:37 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.191 [2024-07-15 22:01:37.276642] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:43.191 [2024-07-15 22:01:37.276663] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:43.191 [2024-07-15 22:01:37.282355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.191 [2024-07-15 22:01:37.282373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.191 [2024-07-15 22:01:37.282381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.191 [2024-07-15 22:01:37.282388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.191 [2024-07-15 22:01:37.282395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.191 [2024-07-15 22:01:37.282401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.191 [2024-07-15 22:01:37.282408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.191 [2024-07-15 22:01:37.282414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:43.191 [2024-07-15 22:01:37.282420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb930f0 is same with the state(5) to be set 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:43.191 [2024-07-15 22:01:37.292369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb930f0 (9): Bad file descriptor 00:23:43.191 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.191 [2024-07-15 22:01:37.302407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.191 [2024-07-15 22:01:37.302655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.191 [2024-07-15 22:01:37.302671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb930f0 with addr=10.0.0.2, port=4420 00:23:43.191 [2024-07-15 22:01:37.302678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb930f0 is same with the state(5) to be set 00:23:43.191 [2024-07-15 22:01:37.302689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb930f0 (9): Bad file descriptor 00:23:43.191 [2024-07-15 22:01:37.302700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:43.191 [2024-07-15 22:01:37.302706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:43.191 [2024-07-15 22:01:37.302713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:43.191 [2024-07-15 22:01:37.302727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
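This ERROR burst is expected: the test has just removed the target's 4420 listener (the nvmf_subsystem_remove_listener call at host/discovery.sh@127 above), so every host reconnect to 10.0.0.2:4420 is refused and the controller reset keeps failing. The errno in the connect() lines decodes straight from the Linux kernel headers:

grep 'define ECONNREFUSED' /usr/include/asm-generic/errno.h
# #define ECONNREFUSED    111 /* Connection refused */

The retries repeat below until the next discovery log page reports that the 4420 path is gone.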
00:23:43.191 [2024-07-15 22:01:37.312460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.191 [2024-07-15 22:01:37.312770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.191 [2024-07-15 22:01:37.312784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb930f0 with addr=10.0.0.2, port=4420 00:23:43.191 [2024-07-15 22:01:37.312791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb930f0 is same with the state(5) to be set 00:23:43.191 [2024-07-15 22:01:37.312801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb930f0 (9): Bad file descriptor 00:23:43.191 [2024-07-15 22:01:37.312825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:43.191 [2024-07-15 22:01:37.312832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:43.191 [2024-07-15 22:01:37.312839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:43.191 [2024-07-15 22:01:37.312848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.191 [2024-07-15 22:01:37.322511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.191 [2024-07-15 22:01:37.322827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.191 [2024-07-15 22:01:37.322842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb930f0 with addr=10.0.0.2, port=4420 00:23:43.191 [2024-07-15 22:01:37.322849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb930f0 is same with the state(5) to be set 00:23:43.191 [2024-07-15 22:01:37.322860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb930f0 (9): Bad file descriptor 00:23:43.191 [2024-07-15 22:01:37.322884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:43.191 [2024-07-15 22:01:37.322891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:43.191 [2024-07-15 22:01:37.322898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:43.192 [2024-07-15 22:01:37.322908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:43.192 [2024-07-15 22:01:37.332568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.192 [2024-07-15 22:01:37.332851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.192 [2024-07-15 22:01:37.332865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb930f0 with addr=10.0.0.2, port=4420 00:23:43.192 [2024-07-15 22:01:37.332872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb930f0 is same with the state(5) to be set 00:23:43.192 [2024-07-15 22:01:37.332884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb930f0 (9): Bad file descriptor 00:23:43.192 [2024-07-15 22:01:37.332900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:43.192 [2024-07-15 22:01:37.332910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:43.192 [2024-07-15 22:01:37.332917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:43.192 [2024-07-15 22:01:37.332932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.192 [2024-07-15 22:01:37.342621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.192 [2024-07-15 22:01:37.342787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.192 [2024-07-15 22:01:37.342801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb930f0 with addr=10.0.0.2, port=4420 00:23:43.192 [2024-07-15 22:01:37.342809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb930f0 is same with the state(5) to be set 00:23:43.192 [2024-07-15 22:01:37.342819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb930f0 (9): Bad file descriptor 00:23:43.192 [2024-07-15 22:01:37.342829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:43.192 [2024-07-15 22:01:37.342836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:43.192 [2024-07-15 22:01:37.342842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:43.192 [2024-07-15 22:01:37.342852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.192 [2024-07-15 22:01:37.352679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.192 [2024-07-15 22:01:37.352946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.192 [2024-07-15 22:01:37.352960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb930f0 with addr=10.0.0.2, port=4420 00:23:43.192 [2024-07-15 22:01:37.352967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb930f0 is same with the state(5) to be set 00:23:43.192 [2024-07-15 22:01:37.352978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb930f0 (9): Bad file descriptor 00:23:43.192 [2024-07-15 22:01:37.352994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:43.192 [2024-07-15 22:01:37.353000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:43.192 [2024-07-15 22:01:37.353008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:43.192 [2024-07-15 22:01:37.353017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
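A few retries later the discovery poller prunes the stale path ("NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found" just below) and the host is left on 4421 only. Put together, the step the trace is exercising looks like this, reusing the helpers sketched above (reconstructions, not the verbatim suite code):

# Target side: drop the first listener (issued at host/discovery.sh@127):
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# Host side: poll until only the second port remains attached
# (NVMF_SECOND_PORT is 4421 in this run):
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'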
00:23:43.192 [2024-07-15 22:01:37.362729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.192 [2024-07-15 22:01:37.363032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.192 [2024-07-15 22:01:37.363046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb930f0 with addr=10.0.0.2, port=4420 00:23:43.192 [2024-07-15 22:01:37.363053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb930f0 is same with the state(5) to be set 00:23:43.192 [2024-07-15 22:01:37.363064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb930f0 (9): Bad file descriptor 00:23:43.192 [2024-07-15 22:01:37.363078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:43.192 [2024-07-15 22:01:37.363084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:43.192 [2024-07-15 22:01:37.363091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:43.192 [2024-07-15 22:01:37.363100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.192 [2024-07-15 22:01:37.363635] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:43.192 [2024-07-15 22:01:37.363650] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:43.192 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.450 22:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.823 [2024-07-15 22:01:38.700396] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:44.823 [2024-07-15 22:01:38.700411] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:44.823 [2024-07-15 22:01:38.700424] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:44.823 [2024-07-15 22:01:38.786684] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:44.823 [2024-07-15 22:01:39.056475] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:44.823 [2024-07-15 22:01:39.056502] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.823 22:01:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.823 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.082 request: 00:23:45.082 { 00:23:45.082 "name": "nvme", 00:23:45.082 "trtype": "tcp", 00:23:45.082 "traddr": "10.0.0.2", 00:23:45.082 "adrfam": "ipv4", 00:23:45.082 "trsvcid": "8009", 00:23:45.082 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:45.082 "wait_for_attach": true, 00:23:45.082 "method": "bdev_nvme_start_discovery", 00:23:45.082 "req_id": 1 00:23:45.082 } 00:23:45.082 Got JSON-RPC error response 00:23:45.082 response: 00:23:45.082 { 00:23:45.082 "code": -17, 00:23:45.082 "message": "File exists" 00:23:45.082 } 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.082 request: 00:23:45.082 { 00:23:45.082 "name": "nvme_second", 00:23:45.082 "trtype": "tcp", 00:23:45.082 "traddr": "10.0.0.2", 00:23:45.082 "adrfam": "ipv4", 00:23:45.082 "trsvcid": "8009", 00:23:45.082 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:45.082 "wait_for_attach": true, 00:23:45.082 "method": "bdev_nvme_start_discovery", 00:23:45.082 "req_id": 1 00:23:45.082 } 00:23:45.082 Got JSON-RPC error response 00:23:45.082 response: 00:23:45.082 { 00:23:45.082 "code": -17, 00:23:45.082 "message": "File exists" 00:23:45.082 } 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.082 22:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.456 [2024-07-15 22:01:40.300035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.456 [2024-07-15 22:01:40.300069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd9200 with addr=10.0.0.2, port=8010 00:23:46.456 [2024-07-15 22:01:40.300086] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:46.456 [2024-07-15 22:01:40.300092] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:46.456 [2024-07-15 22:01:40.300098] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:47.389 [2024-07-15 22:01:41.302401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.389 [2024-07-15 22:01:41.302427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd9200 with addr=10.0.0.2, port=8010 00:23:47.389 [2024-07-15 22:01:41.302442] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:47.389 [2024-07-15 22:01:41.302448] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:47.389 [2024-07-15 22:01:41.302454] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:48.323 [2024-07-15 22:01:42.304571] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:48.323 request: 00:23:48.323 { 00:23:48.323 "name": "nvme_second", 00:23:48.323 "trtype": "tcp", 00:23:48.323 "traddr": "10.0.0.2", 00:23:48.323 "adrfam": "ipv4", 00:23:48.323 "trsvcid": "8010", 00:23:48.323 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:48.323 "wait_for_attach": false, 00:23:48.323 "attach_timeout_ms": 3000, 00:23:48.323 "method": "bdev_nvme_start_discovery", 00:23:48.323 "req_id": 1 
00:23:48.323 } 00:23:48.323 Got JSON-RPC error response 00:23:48.323 response: 00:23:48.323 { 00:23:48.323 "code": -110, 00:23:48.323 "message": "Connection timed out" 00:23:48.323 } 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3797553 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.323 rmmod nvme_tcp 00:23:48.323 rmmod nvme_fabrics 00:23:48.323 rmmod nvme_keyring 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3797519 ']' 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3797519 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3797519 ']' 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3797519 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3797519 
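The timed-out case above uses the same inversion pattern as the two "File exists" cases before it: NOT (from autotest_common.sh) runs its argument and inverts the exit status, so the assertion passes only when the RPC fails. A minimal sketch of the 8010 case, using only the helper and flags visible in the trace; the comments summarize the error path logged above:

NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000
# -T bounds the attach wait at 3000 ms. Nothing listens on 8010, so the
# discovery poller's connect() keeps failing (errno 111) until the timeout
# fires and the RPC returns {"code": -110, "message": "Connection timed out"},
# which NOT converts into a passing assertion.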
00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3797519' 00:23:48.323 killing process with pid 3797519 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3797519 00:23:48.323 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3797519 00:23:48.581 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.581 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.581 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.581 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.581 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.581 22:01:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.581 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.581 22:01:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.481 22:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.481 00:23:50.481 real 0m17.847s 00:23:50.481 user 0m22.717s 00:23:50.481 sys 0m5.303s 00:23:50.481 22:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.481 22:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.481 ************************************ 00:23:50.481 END TEST nvmf_host_discovery 00:23:50.481 ************************************ 00:23:50.740 22:01:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.740 22:01:44 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:50.740 22:01:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.740 22:01:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.740 22:01:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.740 ************************************ 00:23:50.740 START TEST nvmf_host_multipath_status 00:23:50.740 ************************************ 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:50.740 * Looking for test storage... 
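Nearly every assertion in the discovery test that just finished went through the waitforcondition helper traced at autotest_common.sh@912-916: stash the condition string, then eval it up to max times until it holds. A minimal sketch consistent with that trace (the per-retry delay is an assumption; the xtrace only shows the countdown and the eval):

waitforcondition() {
    local cond=$1    # e.g. '[[ "$(get_bdev_list)" == "" ]]'  (@912)
    local max=10     # retry budget (@913)
    while (( max-- )); do             # (@914)
        eval "$cond" && return 0      # (@915-916)
        sleep 1                       # assumed delay; not visible in the xtrace
    done
    return 1                          # condition never held within the budget
}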
00:23:50.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.740 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.741 22:01:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.741 22:01:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.004 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:56.005 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:56.005 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
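The NIC enumeration just traced first buckets PCI device IDs by family, then narrows the candidate list to E810 because this job runs with SPDK_TEST_NVMF_NICS=e810. A minimal sketch of that branch (nvmf/common.sh@289-341), assuming pci_bus_cache maps "vendor:device" keys to PCI addresses, as the lookups suggest:

intel=0x8086                                 # (@289; declared local in the sourced function)
e810=()
e810+=(${pci_bus_cache["$intel:0x1592"]})    # (@301)
e810+=(${pci_bus_cache["$intel:0x159b"]})    # (@302) the ID matched for both 0000:86:00.x ports above
pci_devs=("${e810[@]}")                      # (@330) e810-only candidate list
for pci in "${pci_devs[@]}"; do              # (@340)
    echo "Found $pci ..."                    # (@341) produces the two "Found 0000:86:00.x" lines above
done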
00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:56.005 Found net devices under 0000:86:00.0: cvl_0_0 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:56.005 Found net devices under 0000:86:00.1: cvl_0_1 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.005 22:01:50 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.005 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:23:56.263 00:23:56.263 --- 10.0.0.2 ping statistics --- 00:23:56.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.263 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:23:56.263 00:23:56.263 --- 10.0.0.1 ping statistics --- 00:23:56.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.263 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3802710 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3802710 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3802710 ']' 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.263 22:01:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:56.263 [2024-07-15 22:01:50.500734] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:23:56.263 [2024-07-15 22:01:50.500779] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.521 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.521 [2024-07-15 22:01:50.559037] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:56.521 [2024-07-15 22:01:50.640791] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.521 [2024-07-15 22:01:50.640826] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.521 [2024-07-15 22:01:50.640833] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.521 [2024-07-15 22:01:50.640839] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.521 [2024-07-15 22:01:50.640845] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.521 [2024-07-15 22:01:50.640886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.521 [2024-07-15 22:01:50.640888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.085 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.085 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:57.085 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.085 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.085 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.343 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.343 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3802710 00:23:57.343 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:57.343 [2024-07-15 22:01:51.498208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.343 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:57.627 Malloc0 00:23:57.627 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:57.891 22:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.891 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.148 [2024-07-15 22:01:52.225850] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.148 22:01:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.148 [2024-07-15 22:01:52.386247] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3803100 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3803100 /var/tmp/bdevperf.sock 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3803100 ']' 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.405 22:01:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:59.333 22:01:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.333 22:01:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:59.333 22:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:59.333 22:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:59.590 Nvme0n1 00:23:59.846 22:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:00.111 Nvme0n1 00:24:00.111 22:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:00.111 22:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:02.628 22:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:02.628 22:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:02.628 22:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.628 22:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:03.559 22:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:03.559 22:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.559 22:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.559 22:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.816 22:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.816 22:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.816 22:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.816 22:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.816 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.816 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.816 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.816 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.072 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.072 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.072 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.072 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.328 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.328 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.328 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.328 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.328 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.328 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.328 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.328 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.584 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.584 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:04.584 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:04.841 22:01:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:05.097 22:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:06.028 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:06.028 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:06.028 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.028 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.285 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.285 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:06.285 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.285 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.285 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.285 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.285 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.285 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.542 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.542 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.542 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.542 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.799 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.799 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.799 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.799 22:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.055 22:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.055 22:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.055 22:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.055 22:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.055 22:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.055 22:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:07.055 22:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.313 22:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:07.570 22:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:08.501 22:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:08.501 22:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:08.501 22:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.501 22:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.757 22:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.757 22:02:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:08.757 22:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.757 22:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.014 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.014 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.014 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.014 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.014 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.014 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.014 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.014 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.270 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.270 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.270 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.270 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.526 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.526 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:09.526 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.526 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.526 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.526 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:09.526 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:09.783 22:02:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:10.040 22:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:10.970 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:10.970 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:10.970 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.970 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.227 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.227 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:11.227 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.227 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.483 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.483 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.483 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.483 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.740 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.741 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.741 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.741 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.741 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.741 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.741 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.741 22:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.998 22:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:24:11.998 22:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:11.998 22:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.998 22:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.255 22:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.255 22:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:12.255 22:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:12.255 22:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:12.511 22:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:13.441 22:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:13.441 22:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:13.441 22:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.441 22:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.698 22:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.698 22:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:13.698 22:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.698 22:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.955 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.955 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.955 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.955 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.211 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.211 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:24:14.211 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.211 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.211 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.211 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:14.211 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.211 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.468 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.468 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:14.468 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.468 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.748 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.748 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:14.748 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:14.748 22:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.004 22:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:15.960 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:15.960 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:15.960 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.960 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.228 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.228 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.228 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.229 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:16.229 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.229 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:16.229 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.229 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:16.484 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.484 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:16.484 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.484 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:16.740 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.740 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:24:16.740 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.740 22:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:16.996 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:16.997 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:16.997 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.997 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:16.997 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.997 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
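The @116 call just above switches the multipath policy of the Nvme0n1 bdev from the default active_passive to active_active. Standalone, the call is simply the following (rpc_py is shorthand assumed here for the full scripts/rpc.py invocation):

# With active_active, every optimized path may carry I/O concurrently,
# instead of one active path with the others on standby.
rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$rpc_py bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

The effect shows in the next step: with both listeners optimized, check_status expects current=true on 4420 and 4421 at once, something that never occurred in the active_passive transitions above.
00:24:17.254 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:24:17.254 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n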
optimized 00:24:17.512 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:17.770 22:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:18.706 22:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:18.706 22:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:18.706 22:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.706 22:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.964 22:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.964 22:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:18.964 22:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.964 22:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.964 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.964 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.964 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.964 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.222 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.222 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.222 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.222 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.481 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.481 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.481 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.481 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.481 22:02:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.481 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.481 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.481 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.739 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.739 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:19.739 22:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:19.997 22:02:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:20.255 22:02:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:21.190 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:21.190 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:21.190 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.190 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.449 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.449 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:21.449 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.449 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.449 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.449 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.449 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.449 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.708 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.708 22:02:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.708 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.708 22:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:21.968 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.968 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:21.968 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.968 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.227 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.227 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.227 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.227 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.227 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.227 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:22.227 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:22.486 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:22.746 22:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:23.680 22:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:23.681 22:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:23.681 22:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.681 22:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:23.939 22:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.939 22:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:23.939 22:02:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:23.939 22:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.939 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.939 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:23.939 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.939 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.198 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.198 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.198 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.198 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:24.457 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.457 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:24.457 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.457 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:24.715 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.715 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:24.715 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.715 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:24.715 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.715 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:24.715 22:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:24.973 22:02:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:24:25.232 22:02:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:24:26.169 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:24:26.169 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:26.169 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:26.169 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:26.427 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:26.427 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:26.427 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:26.427 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:26.686 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:26.686 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:26.686 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:26.686 22:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:26.944 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:26.944 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:26.944 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:26.944 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:27.203 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
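The @59/@60 and @68-@73 entries replay the other two helpers this test leans on; a sketch reconstructed from the trace (not the verbatim script), reusing the port_status sketch above:

set_ANA_state() {
    # Target-side RPCs on the default socket: set the ANA state that each
    # of the two listeners (port 4420, then 4421) reports for cnode1.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

check_status() {
    # Six assertions per ANA transition, in the argument order seen at the
    # check_status call sites: current, connected, accessible per portal.
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

Here, after set_ANA_state non_optimized inaccessible, the expectation check_status true false true true true false reads: 4420 still carries I/O and stays accessible, both paths remain connected at the transport level, and 4421 is no longer accessible.
00:24:27.203 22:02:21 nvmf_tcp.nvmf_host_multipath_status --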
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:27.203 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.203 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:27.464 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.464 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3803100 00:24:27.464 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3803100 ']' 00:24:27.464 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3803100 00:24:27.464 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:27.464 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.464 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3803100 00:24:27.465 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:27.465 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:27.465 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3803100' 00:24:27.465 killing process with pid 3803100 00:24:27.465 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3803100 00:24:27.465 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3803100 00:24:27.465 Connection closed with partial response: 00:24:27.465 00:24:27.465 00:24:27.465 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3803100 00:24:27.465 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:27.465 [2024-07-15 22:01:52.447607] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:24:27.465 [2024-07-15 22:01:52.447664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803100 ] 00:24:27.465 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.465 [2024-07-15 22:01:52.499140] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.465 [2024-07-15 22:01:52.574668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.465 Running I/O for 90 seconds... 
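The killprocess trace above (common/autotest_common.sh@948-@972) tears bdevperf down once the last status check passes. A sketch of that helper as reconstructed from the traced steps; the sudo branch is left empty because it never fires here (the process name resolves to reactor_2):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                           # @948: refuse an empty pid
    kill -0 "$pid" || return 0                          # @952: already gone
    if [ "$(uname)" = Linux ]; then                     # @953
        process_name=$(ps --no-headers -o comm= "$pid") # @954: reactor_2 here
    fi
    if [ "$process_name" = sudo ]; then                 # @958: false in this run
        :                                               # (sudo handling omitted)
    fi
    echo "killing process with pid $pid"                # @966
    kill "$pid"                                         # @967
    wait "$pid"                                         # @972: reap, surface status
}

The "Connection closed with partial response" lines are what the perform_tests RPC client prints when its connection is severed by that kill. The per-command NOTICE log replayed below from try.txt shows WRITE/READ commands completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the status a target returns for I/O addressed to a listener whose ANA group is inaccessible; the multipath initiator is expected to retry such I/O on the surviving path.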
00:24:27.465 [2024-07-15 22:02:06.465450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.465982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.465989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:27.465 [2024-07-15 22:02:06.466003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.465 [2024-07-15 22:02:06.466009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:27.465 [... several hundred repeated nvme_qpair.c NOTICE pairs elided: WRITE/READ commands on qid:1 (lba 29088-30104 at 22:02:06, lba 68776-69768 at 22:02:19), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:24:27.471 Received shutdown signal, test time was about 27.106098 seconds
00:24:27.471
00:24:27.471                                               Latency(us)
00:24:27.471 Device Information       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:27.471 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:27.471 	 Verification LBA range: start 0x0 length 0x4000
00:24:27.471 	 Nvme0n1             :      27.11   10286.20      40.18       0.00       0.00   12422.36     658.92 3019898.88
00:24:27.471 ===================================================================================================
00:24:27.471 Total                    :            10286.20      40.18       0.00       0.00   12422.36     658.92 3019898.88
00:24:27.471 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
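The @143 and @145 steps above are the multipath test's own teardown: once bdevperf has reported the verify results, the script removes the subsystem it created and drops its error trap. Below is a minimal shell sketch of that step, assuming the default SPDK RPC socket (/var/tmp/spdk.sock) and the cnode1 NQN used by this run; it is illustrative, not the verbatim multipath_status.sh:

    # Delete the NVMe-oF subsystem created for the test. Removing the subsystem
    # also removes its listeners and namespaces, so any path the host still
    # holds is disconnected cleanly before the target app is shut down.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # The cleanup trap only matters while the test body runs; clear it now.
    trap - SIGINT SIGTERM EXIT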
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:27.730 rmmod nvme_tcp
00:24:27.730 rmmod nvme_fabrics
00:24:27.730 rmmod nvme_keyring
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3802710 ']'
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3802710
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3802710 ']'
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3802710
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:27.730 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3802710
00:24:27.990 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:27.990 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:27.990 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3802710'
00:24:27.990 killing process with pid 3802710
00:24:27.990 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3802710
00:24:27.990 22:02:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3802710
00:24:27.990 22:02:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:27.990 22:02:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:27.990 22:02:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:27.990 22:02:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:27.990 22:02:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:27.990 22:02:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:27.990 22:02:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:27.990 22:02:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
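The two helpers traced above do the heavy lifting of the shutdown. A hedged reconstruction of their behavior, pieced together from the xtrace lines rather than copied from nvmf/common.sh or autotest_common.sh (the one-second back-off in the retry loop is an assumption):

    nvmfcleanup() {
        sync
        for i in {1..20}; do
            # nvme-tcp can still be referenced right after heavy I/O, so retry the unload
            modprobe -v -r nvme-tcp && break
            sleep 1    # assumption: brief pause between attempts
        done
        modprobe -v -r nvme-fabrics
    }

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                      # already gone, nothing to do
        [ "$(uname)" = Linux ] && ps --no-headers -o comm= "$pid"
        kill "$pid" && wait "$pid"                      # SIGTERM, then reap the child
    }

Note that the unload loop runs with set +e so a still-busy module does not abort the whole test; the trace re-enables set -e once the modules are gone.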
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.526 22:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.526 00:24:30.526 real 0m39.478s 00:24:30.526 user 1m46.523s 00:24:30.526 sys 0m10.712s 00:24:30.526 22:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:30.526 22:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:30.526 ************************************ 00:24:30.526 END TEST nvmf_host_multipath_status 00:24:30.526 ************************************ 00:24:30.526 22:02:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:30.526 22:02:24 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:30.526 22:02:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:30.526 22:02:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:30.526 22:02:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.526 ************************************ 00:24:30.526 START TEST nvmf_discovery_remove_ifc 00:24:30.526 ************************************ 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:30.526 * Looking for test storage... 00:24:30.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.526 22:02:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.526 22:02:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:35.826 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.826 22:02:29 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:35.826 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:35.826 Found net devices under 0000:86:00.0: cvl_0_0 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:35.826 Found net devices under 0000:86:00.1: cvl_0_1 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:35.826 22:02:29 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:35.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:35.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms
00:24:35.826
00:24:35.826 --- 10.0.0.2 ping statistics ---
00:24:35.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:35.826 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:35.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:35.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms
00:24:35.826
00:24:35.826 --- 10.0.0.1 ping statistics ---
00:24:35.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:35.826 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3811410
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3811410
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3811410 ']'
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:35.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:35.826 22:02:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:35.826 [2024-07-15 22:02:29.697056] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:24:35.826 [2024-07-15 22:02:29.697100] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.826 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.826 [2024-07-15 22:02:29.753082] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.826 [2024-07-15 22:02:29.831632] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.826 [2024-07-15 22:02:29.831665] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.826 [2024-07-15 22:02:29.831672] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.826 [2024-07-15 22:02:29.831679] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.826 [2024-07-15 22:02:29.831684] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.826 [2024-07-15 22:02:29.831700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.392 [2024-07-15 22:02:30.545344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.392 [2024-07-15 22:02:30.553449] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:36.392 null0 00:24:36.392 [2024-07-15 22:02:30.585463] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3811654 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3811654 /tmp/host.sock 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3811654 ']' 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:36.392 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.392 22:02:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.652 [2024-07-15 22:02:30.652354] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:24:36.652 [2024-07-15 22:02:30.652395] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811654 ] 00:24:36.652 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.652 [2024-07-15 22:02:30.705407] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.652 [2024-07-15 22:02:30.777751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.218 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.477 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.477 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:37.477 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.477 22:02:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.413 [2024-07-15 22:02:32.543490] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:38.413 [2024-07-15 22:02:32.543510] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:38.413 [2024-07-15 22:02:32.543523] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:38.413 [2024-07-15 22:02:32.630795] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:38.671 [2024-07-15 22:02:32.857385] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:38.671 [2024-07-15 22:02:32.857434] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:38.671 [2024-07-15 22:02:32.857455] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:38.671 [2024-07-15 22:02:32.857468] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:38.671 [2024-07-15 22:02:32.857487] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.671 [2024-07-15 22:02:32.862863] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x130aec0 was disconnected and freed. delete nvme_qpair. 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:38.671 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:38.929 22:02:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.929 22:02:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:38.929 22:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.864 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.864 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.864 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.864 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.864 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.864 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.864 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.864 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.123 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.123 22:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.059 22:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.996 22:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.374 22:02:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.374 22:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.374 22:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.374 22:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.374 22:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.374 22:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.374 22:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.374 22:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.374 22:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:43.374 22:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.309 [2024-07-15 22:02:38.298692] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:44.309 [2024-07-15 22:02:38.298728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.309 [2024-07-15 22:02:38.298742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.309 [2024-07-15 22:02:38.298751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.309 [2024-07-15 22:02:38.298758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.309 [2024-07-15 22:02:38.298766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.309 [2024-07-15 22:02:38.298773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.309 [2024-07-15 22:02:38.298780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.309 [2024-07-15 22:02:38.298787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:44.309 [2024-07-15 22:02:38.298794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.309 [2024-07-15 22:02:38.298801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.309 [2024-07-15 22:02:38.298808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d1810 is same with the state(5) to be set 00:24:44.309 [2024-07-15 22:02:38.308714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d1810 (9): Bad file descriptor 00:24:44.309 [2024-07-15 22:02:38.318753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:44.309 22:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.243 [2024-07-15 22:02:39.330244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:45.243 [2024-07-15 22:02:39.330280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d1810 with addr=10.0.0.2, port=4420 00:24:45.243 [2024-07-15 22:02:39.330294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d1810 is same with the state(5) to be set 00:24:45.243 [2024-07-15 22:02:39.330317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d1810 (9): Bad file descriptor 00:24:45.243 [2024-07-15 22:02:39.330360] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:45.243 [2024-07-15 22:02:39.330377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:45.243 [2024-07-15 22:02:39.330387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:45.243 [2024-07-15 22:02:39.330397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:45.243 [2024-07-15 22:02:39.330414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
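The errno 110 (ETIMEDOUT) failures above are the point of this test: a few trace lines earlier, host/discovery_remove_ifc.sh@75 and @76 deleted 10.0.0.2/24 from cvl_0_0 and downed the link inside the target namespace, so every reconnect attempt from the host has to time out. Condensed into plain shell, the attach plus failure-injection sequence that produced this state is roughly the following sketch (same rpc.py path, /tmp/host.sock socket, and interface names as this run; the timeouts come from the @69 attach):

    # attach through discovery with aggressive reconnect limits (host/discovery_remove_ifc.sh@69)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

    # failure injection: remove the target address and down the link (@75/@76)
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

The two-second controller-loss timeout is what keeps the test fast: the host retries once per second, gives up on cnode0, drops nvme0n1 from the bdev list, and leaves rediscovery to attach the subsystem again later.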
00:24:45.243 [2024-07-15 22:02:39.330429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:45.243 22:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:46.233 [2024-07-15 22:02:40.332911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:46.233 [2024-07-15 22:02:40.332937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:46.233 [2024-07-15 22:02:40.332944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:46.233 [2024-07-15 22:02:40.332952] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:46.233 [2024-07-15 22:02:40.332964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.233 [2024-07-15 22:02:40.332983] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:46.233 [2024-07-15 22:02:40.333003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.233 [2024-07-15 22:02:40.333013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.233 [2024-07-15 22:02:40.333022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.233 [2024-07-15 22:02:40.333029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.233 [2024-07-15 22:02:40.333036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.233 [2024-07-15 22:02:40.333043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.234 [2024-07-15 22:02:40.333050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.234 [2024-07-15 22:02:40.333058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.234 [2024-07-15 22:02:40.333065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.234 [2024-07-15 22:02:40.333071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.234 [2024-07-15 22:02:40.333079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
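Once the old controller is failed out, recovery is the mirror image: re-add the address, bring the link back up, and poll the host's bdev list until rediscovery attaches a new controller (nvme1n1 below). Reconstructed from the @29/@33/@34 and @82/@83/@86 trace lines, the polling logic behaves roughly like this sketch (the real helpers live in host/discovery_remove_ifc.sh, where rpc_cmd wraps rpc.py against /tmp/host.sock):

    get_bdev_list() {
        # names of all bdevs known to the host app, as one sorted line (@29)
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # re-check once per second until the bdev list equals the expected value (@33/@34)
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

    # restore connectivity, then wait for the rediscovered namespace (@82/@83/@86)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1

The same helper is called earlier as wait_for_bdev '' to confirm the list drains to empty before the interface is restored.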
00:24:46.234 [2024-07-15 22:02:40.333089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d0c90 (9): Bad file descriptor 00:24:46.234 [2024-07-15 22:02:40.333975] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:46.234 [2024-07-15 22:02:40.333987] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.234 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:46.492 22:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:47.442 22:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:48.379 [2024-07-15 22:02:42.348345] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:48.379 [2024-07-15 22:02:42.348363] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:48.379 [2024-07-15 22:02:42.348377] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:48.379 [2024-07-15 22:02:42.475758] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:48.379 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.379 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.379 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.379 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.379 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.379 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.379 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.639 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.639 [2024-07-15 22:02:42.656416] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:48.639 [2024-07-15 22:02:42.656454] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:48.639 [2024-07-15 22:02:42.656473] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:48.639 [2024-07-15 22:02:42.656487] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:48.639 [2024-07-15 22:02:42.656494] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:48.639 [2024-07-15 22:02:42.657887] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12bfb10 was disconnected and freed. delete nvme_qpair. 
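With nvme1 attached and the old qpair freed, the final @33 comparison ([[ nvme1n1 != \n\v\m\e\1\n\1 ]]) is false, the wait loop ends, and the test moves to teardown. The killprocess helper traced next amounts to roughly this sketch, inferred from the common/autotest_common.sh@948-@972 lines below (the real helper has extra branches, e.g. a different kill path when the process runs under sudo):

    killprocess() {
        # refuse empty pids, verify the process is alive and looks like ours,
        # then signal it and reap the exit status
        local pid=$1
        [ -n "$pid" ] || return 1                             # @948
        kill -0 "$pid"                                        # @952: still running?
        if [ "$(uname)" = Linux ]; then                       # @953
            process_name=$(ps --no-headers -o comm= "$pid")   # @954: e.g. reactor_0
        fi
        [ "$process_name" != sudo ] || return 1               # @958: never signal the sudo wrapper here
        echo "killing process with pid $pid"                  # @966
        kill "$pid"                                           # @967
        wait "$pid"                                           # @972
    }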
00:24:48.639 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:48.639 22:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3811654 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3811654 ']' 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3811654 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811654 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811654' 00:24:49.575 killing process with pid 3811654 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3811654 00:24:49.575 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3811654 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.834 rmmod nvme_tcp 00:24:49.834 rmmod nvme_fabrics 00:24:49.834 rmmod nvme_keyring 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3811410 ']' 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3811410 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3811410 ']' 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3811410 00:24:49.834 22:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:49.834 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:49.834 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811410 00:24:49.834 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:49.834 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:49.834 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811410' 00:24:49.834 killing process with pid 3811410 00:24:49.834 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3811410 00:24:49.834 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3811410 00:24:50.093 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.093 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.093 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.093 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.093 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.093 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.093 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.093 22:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.628 22:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.628 00:24:52.628 real 0m21.956s 00:24:52.628 user 0m28.786s 00:24:52.628 sys 0m5.188s 00:24:52.628 22:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:52.628 22:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.628 ************************************ 00:24:52.628 END TEST nvmf_discovery_remove_ifc 00:24:52.628 ************************************ 00:24:52.628 22:02:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:52.628 22:02:46 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:52.628 22:02:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:52.628 22:02:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:24:52.628 22:02:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:52.628 ************************************ 00:24:52.628 START TEST nvmf_identify_kernel_target 00:24:52.628 ************************************ 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:52.628 * Looking for test storage... 00:24:52.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:52.628 22:02:46 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.628 22:02:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:57.972 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:57.972 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:57.972 Found net devices under 0000:86:00.0: cvl_0_0 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:57.972 Found net devices under 0000:86:00.1: cvl_0_1 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:57.972 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:57.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:24:57.973 00:24:57.973 --- 10.0.0.2 ping statistics --- 00:24:57.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.973 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:24:57.973 00:24:57.973 --- 10.0.0.1 ping statistics --- 00:24:57.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.973 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:57.973 22:02:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:57.973 22:02:52 
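[editor's note] The nvmf_tcp_init block above builds the physical-NIC test topology: one e810 port (cvl_0_0) is moved into a private network namespace and both directions are ping-checked before any NVMe/TCP traffic flows. Condensed into a sketch, with the interface names and 10.0.0.0/24 addresses exactly as this run picked them:

# Isolate one port in its own namespace so traffic between the two ports
# actually crosses the link instead of looping through the local stack.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # root-namespace side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # namespaced side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the root-namespace side, then verify
# reachability both ways, as in the ping output above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1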
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:57.973 22:02:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:00.530 Waiting for block devices as requested 00:25:00.530 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:00.530 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:00.789 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:00.789 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:00.789 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:00.789 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:01.047 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:01.047 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:01.047 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:01.047 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:01.306 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:01.306 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:01.306 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:01.565 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:01.565 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:01.565 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:01.565 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:01.823 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:01.823 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:01.823 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:01.823 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:01.823 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:01.823 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:01.823 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:01.824 No valid GPT data, bailing 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:01.824 22:02:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:01.824 00:25:01.824 Discovery Log Number of Records 2, Generation counter 2 00:25:01.824 =====Discovery Log Entry 0====== 00:25:01.824 trtype: tcp 00:25:01.824 adrfam: ipv4 00:25:01.824 subtype: current discovery subsystem 00:25:01.824 treq: not specified, sq flow control disable supported 00:25:01.824 portid: 1 00:25:01.824 trsvcid: 4420 00:25:01.824 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:01.824 traddr: 10.0.0.1 00:25:01.824 eflags: none 00:25:01.824 sectype: none 00:25:01.824 =====Discovery Log Entry 1====== 00:25:01.824 trtype: tcp 00:25:01.824 adrfam: ipv4 00:25:01.824 subtype: nvme subsystem 00:25:01.824 treq: not specified, sq flow control disable supported 00:25:01.824 portid: 1 00:25:01.824 trsvcid: 4420 00:25:01.824 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:01.824 traddr: 10.0.0.1 00:25:01.824 eflags: none 00:25:01.824 sectype: none 00:25:01.824 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:01.824 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:02.083 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.083 ===================================================== 00:25:02.083 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:02.083 ===================================================== 00:25:02.083 Controller Capabilities/Features 00:25:02.083 ================================ 00:25:02.083 Vendor ID: 0000 00:25:02.083 Subsystem Vendor ID: 0000 00:25:02.083 Serial Number: f342ef0b295d6b1aa24d 00:25:02.083 Model Number: Linux 00:25:02.083 Firmware Version: 6.7.0-68 00:25:02.083 Recommended Arb Burst: 0 00:25:02.083 IEEE OUI Identifier: 00 00 00 00:25:02.083 Multi-path I/O 00:25:02.083 May have multiple subsystem ports: No 00:25:02.083 May have multiple 
controllers: No 00:25:02.083 Associated with SR-IOV VF: No 00:25:02.083 Max Data Transfer Size: Unlimited 00:25:02.083 Max Number of Namespaces: 0 00:25:02.083 Max Number of I/O Queues: 1024 00:25:02.083 NVMe Specification Version (VS): 1.3 00:25:02.083 NVMe Specification Version (Identify): 1.3 00:25:02.083 Maximum Queue Entries: 1024 00:25:02.083 Contiguous Queues Required: No 00:25:02.083 Arbitration Mechanisms Supported 00:25:02.083 Weighted Round Robin: Not Supported 00:25:02.083 Vendor Specific: Not Supported 00:25:02.083 Reset Timeout: 7500 ms 00:25:02.083 Doorbell Stride: 4 bytes 00:25:02.083 NVM Subsystem Reset: Not Supported 00:25:02.083 Command Sets Supported 00:25:02.083 NVM Command Set: Supported 00:25:02.083 Boot Partition: Not Supported 00:25:02.083 Memory Page Size Minimum: 4096 bytes 00:25:02.084 Memory Page Size Maximum: 4096 bytes 00:25:02.084 Persistent Memory Region: Not Supported 00:25:02.084 Optional Asynchronous Events Supported 00:25:02.084 Namespace Attribute Notices: Not Supported 00:25:02.084 Firmware Activation Notices: Not Supported 00:25:02.084 ANA Change Notices: Not Supported 00:25:02.084 PLE Aggregate Log Change Notices: Not Supported 00:25:02.084 LBA Status Info Alert Notices: Not Supported 00:25:02.084 EGE Aggregate Log Change Notices: Not Supported 00:25:02.084 Normal NVM Subsystem Shutdown event: Not Supported 00:25:02.084 Zone Descriptor Change Notices: Not Supported 00:25:02.084 Discovery Log Change Notices: Supported 00:25:02.084 Controller Attributes 00:25:02.084 128-bit Host Identifier: Not Supported 00:25:02.084 Non-Operational Permissive Mode: Not Supported 00:25:02.084 NVM Sets: Not Supported 00:25:02.084 Read Recovery Levels: Not Supported 00:25:02.084 Endurance Groups: Not Supported 00:25:02.084 Predictable Latency Mode: Not Supported 00:25:02.084 Traffic Based Keep ALive: Not Supported 00:25:02.084 Namespace Granularity: Not Supported 00:25:02.084 SQ Associations: Not Supported 00:25:02.084 UUID List: Not Supported 00:25:02.084 Multi-Domain Subsystem: Not Supported 00:25:02.084 Fixed Capacity Management: Not Supported 00:25:02.084 Variable Capacity Management: Not Supported 00:25:02.084 Delete Endurance Group: Not Supported 00:25:02.084 Delete NVM Set: Not Supported 00:25:02.084 Extended LBA Formats Supported: Not Supported 00:25:02.084 Flexible Data Placement Supported: Not Supported 00:25:02.084 00:25:02.084 Controller Memory Buffer Support 00:25:02.084 ================================ 00:25:02.084 Supported: No 00:25:02.084 00:25:02.084 Persistent Memory Region Support 00:25:02.084 ================================ 00:25:02.084 Supported: No 00:25:02.084 00:25:02.084 Admin Command Set Attributes 00:25:02.084 ============================ 00:25:02.084 Security Send/Receive: Not Supported 00:25:02.084 Format NVM: Not Supported 00:25:02.084 Firmware Activate/Download: Not Supported 00:25:02.084 Namespace Management: Not Supported 00:25:02.084 Device Self-Test: Not Supported 00:25:02.084 Directives: Not Supported 00:25:02.084 NVMe-MI: Not Supported 00:25:02.084 Virtualization Management: Not Supported 00:25:02.084 Doorbell Buffer Config: Not Supported 00:25:02.084 Get LBA Status Capability: Not Supported 00:25:02.084 Command & Feature Lockdown Capability: Not Supported 00:25:02.084 Abort Command Limit: 1 00:25:02.084 Async Event Request Limit: 1 00:25:02.084 Number of Firmware Slots: N/A 00:25:02.084 Firmware Slot 1 Read-Only: N/A 00:25:02.084 Firmware Activation Without Reset: N/A 00:25:02.084 Multiple Update Detection Support: N/A 
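[editor's note] The configure_kernel_target steps traced earlier drive the Linux nvmet configfs tree directly; no SPDK target process is involved on this side. The same sequence as a standalone sketch: the NQN, backing device and address come from this run, while the redirection targets (attr_model, device_path, addr_traddr and friends) are not visible in the xtrace output and are filled in here with the standard nvmet configfs attribute names:

cfg=/sys/kernel/config/nvmet
subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
port=$cfg/ports/1

modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

# Subsystem identity and backing namespace.
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# TCP listener on the root-namespace address.
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"

# Publishing the subsystem is just a symlink under the port; afterwards
# the kernel discovery service on 10.0.0.1:4420 reports the two entries
# shown in the log above.
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420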
00:25:02.084 Firmware Update Granularity: No Information Provided 00:25:02.084 Per-Namespace SMART Log: No 00:25:02.084 Asymmetric Namespace Access Log Page: Not Supported 00:25:02.084 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:02.084 Command Effects Log Page: Not Supported 00:25:02.084 Get Log Page Extended Data: Supported 00:25:02.084 Telemetry Log Pages: Not Supported 00:25:02.084 Persistent Event Log Pages: Not Supported 00:25:02.084 Supported Log Pages Log Page: May Support 00:25:02.084 Commands Supported & Effects Log Page: Not Supported 00:25:02.084 Feature Identifiers & Effects Log Page:May Support 00:25:02.084 NVMe-MI Commands & Effects Log Page: May Support 00:25:02.084 Data Area 4 for Telemetry Log: Not Supported 00:25:02.084 Error Log Page Entries Supported: 1 00:25:02.084 Keep Alive: Not Supported 00:25:02.084 00:25:02.084 NVM Command Set Attributes 00:25:02.084 ========================== 00:25:02.084 Submission Queue Entry Size 00:25:02.084 Max: 1 00:25:02.084 Min: 1 00:25:02.084 Completion Queue Entry Size 00:25:02.084 Max: 1 00:25:02.084 Min: 1 00:25:02.084 Number of Namespaces: 0 00:25:02.084 Compare Command: Not Supported 00:25:02.084 Write Uncorrectable Command: Not Supported 00:25:02.084 Dataset Management Command: Not Supported 00:25:02.084 Write Zeroes Command: Not Supported 00:25:02.084 Set Features Save Field: Not Supported 00:25:02.084 Reservations: Not Supported 00:25:02.084 Timestamp: Not Supported 00:25:02.084 Copy: Not Supported 00:25:02.084 Volatile Write Cache: Not Present 00:25:02.084 Atomic Write Unit (Normal): 1 00:25:02.084 Atomic Write Unit (PFail): 1 00:25:02.084 Atomic Compare & Write Unit: 1 00:25:02.084 Fused Compare & Write: Not Supported 00:25:02.084 Scatter-Gather List 00:25:02.084 SGL Command Set: Supported 00:25:02.084 SGL Keyed: Not Supported 00:25:02.084 SGL Bit Bucket Descriptor: Not Supported 00:25:02.084 SGL Metadata Pointer: Not Supported 00:25:02.084 Oversized SGL: Not Supported 00:25:02.084 SGL Metadata Address: Not Supported 00:25:02.084 SGL Offset: Supported 00:25:02.084 Transport SGL Data Block: Not Supported 00:25:02.084 Replay Protected Memory Block: Not Supported 00:25:02.084 00:25:02.084 Firmware Slot Information 00:25:02.084 ========================= 00:25:02.084 Active slot: 0 00:25:02.084 00:25:02.084 00:25:02.084 Error Log 00:25:02.084 ========= 00:25:02.084 00:25:02.084 Active Namespaces 00:25:02.084 ================= 00:25:02.084 Discovery Log Page 00:25:02.084 ================== 00:25:02.084 Generation Counter: 2 00:25:02.084 Number of Records: 2 00:25:02.084 Record Format: 0 00:25:02.084 00:25:02.084 Discovery Log Entry 0 00:25:02.084 ---------------------- 00:25:02.084 Transport Type: 3 (TCP) 00:25:02.084 Address Family: 1 (IPv4) 00:25:02.084 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:02.084 Entry Flags: 00:25:02.084 Duplicate Returned Information: 0 00:25:02.084 Explicit Persistent Connection Support for Discovery: 0 00:25:02.084 Transport Requirements: 00:25:02.084 Secure Channel: Not Specified 00:25:02.084 Port ID: 1 (0x0001) 00:25:02.084 Controller ID: 65535 (0xffff) 00:25:02.084 Admin Max SQ Size: 32 00:25:02.084 Transport Service Identifier: 4420 00:25:02.084 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:02.084 Transport Address: 10.0.0.1 00:25:02.084 Discovery Log Entry 1 00:25:02.084 ---------------------- 00:25:02.084 Transport Type: 3 (TCP) 00:25:02.084 Address Family: 1 (IPv4) 00:25:02.084 Subsystem Type: 2 (NVM Subsystem) 00:25:02.084 Entry Flags: 
00:25:02.084 Duplicate Returned Information: 0 00:25:02.084 Explicit Persistent Connection Support for Discovery: 0 00:25:02.084 Transport Requirements: 00:25:02.084 Secure Channel: Not Specified 00:25:02.084 Port ID: 1 (0x0001) 00:25:02.084 Controller ID: 65535 (0xffff) 00:25:02.084 Admin Max SQ Size: 32 00:25:02.084 Transport Service Identifier: 4420 00:25:02.084 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:02.084 Transport Address: 10.0.0.1 00:25:02.084 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:02.084 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.084 get_feature(0x01) failed 00:25:02.084 get_feature(0x02) failed 00:25:02.084 get_feature(0x04) failed 00:25:02.084 ===================================================== 00:25:02.084 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:02.084 ===================================================== 00:25:02.084 Controller Capabilities/Features 00:25:02.084 ================================ 00:25:02.084 Vendor ID: 0000 00:25:02.084 Subsystem Vendor ID: 0000 00:25:02.084 Serial Number: 386a748153a0d5eb359b 00:25:02.084 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:02.084 Firmware Version: 6.7.0-68 00:25:02.084 Recommended Arb Burst: 6 00:25:02.084 IEEE OUI Identifier: 00 00 00 00:25:02.084 Multi-path I/O 00:25:02.084 May have multiple subsystem ports: Yes 00:25:02.084 May have multiple controllers: Yes 00:25:02.084 Associated with SR-IOV VF: No 00:25:02.084 Max Data Transfer Size: Unlimited 00:25:02.084 Max Number of Namespaces: 1024 00:25:02.084 Max Number of I/O Queues: 128 00:25:02.084 NVMe Specification Version (VS): 1.3 00:25:02.084 NVMe Specification Version (Identify): 1.3 00:25:02.084 Maximum Queue Entries: 1024 00:25:02.084 Contiguous Queues Required: No 00:25:02.084 Arbitration Mechanisms Supported 00:25:02.084 Weighted Round Robin: Not Supported 00:25:02.084 Vendor Specific: Not Supported 00:25:02.084 Reset Timeout: 7500 ms 00:25:02.084 Doorbell Stride: 4 bytes 00:25:02.084 NVM Subsystem Reset: Not Supported 00:25:02.084 Command Sets Supported 00:25:02.084 NVM Command Set: Supported 00:25:02.084 Boot Partition: Not Supported 00:25:02.084 Memory Page Size Minimum: 4096 bytes 00:25:02.084 Memory Page Size Maximum: 4096 bytes 00:25:02.084 Persistent Memory Region: Not Supported 00:25:02.084 Optional Asynchronous Events Supported 00:25:02.084 Namespace Attribute Notices: Supported 00:25:02.084 Firmware Activation Notices: Not Supported 00:25:02.084 ANA Change Notices: Supported 00:25:02.084 PLE Aggregate Log Change Notices: Not Supported 00:25:02.084 LBA Status Info Alert Notices: Not Supported 00:25:02.084 EGE Aggregate Log Change Notices: Not Supported 00:25:02.084 Normal NVM Subsystem Shutdown event: Not Supported 00:25:02.084 Zone Descriptor Change Notices: Not Supported 00:25:02.084 Discovery Log Change Notices: Not Supported 00:25:02.084 Controller Attributes 00:25:02.084 128-bit Host Identifier: Supported 00:25:02.084 Non-Operational Permissive Mode: Not Supported 00:25:02.085 NVM Sets: Not Supported 00:25:02.085 Read Recovery Levels: Not Supported 00:25:02.085 Endurance Groups: Not Supported 00:25:02.085 Predictable Latency Mode: Not Supported 00:25:02.085 Traffic Based Keep ALive: Supported 00:25:02.085 Namespace Granularity: Not Supported 
00:25:02.085 SQ Associations: Not Supported 00:25:02.085 UUID List: Not Supported 00:25:02.085 Multi-Domain Subsystem: Not Supported 00:25:02.085 Fixed Capacity Management: Not Supported 00:25:02.085 Variable Capacity Management: Not Supported 00:25:02.085 Delete Endurance Group: Not Supported 00:25:02.085 Delete NVM Set: Not Supported 00:25:02.085 Extended LBA Formats Supported: Not Supported 00:25:02.085 Flexible Data Placement Supported: Not Supported 00:25:02.085 00:25:02.085 Controller Memory Buffer Support 00:25:02.085 ================================ 00:25:02.085 Supported: No 00:25:02.085 00:25:02.085 Persistent Memory Region Support 00:25:02.085 ================================ 00:25:02.085 Supported: No 00:25:02.085 00:25:02.085 Admin Command Set Attributes 00:25:02.085 ============================ 00:25:02.085 Security Send/Receive: Not Supported 00:25:02.085 Format NVM: Not Supported 00:25:02.085 Firmware Activate/Download: Not Supported 00:25:02.085 Namespace Management: Not Supported 00:25:02.085 Device Self-Test: Not Supported 00:25:02.085 Directives: Not Supported 00:25:02.085 NVMe-MI: Not Supported 00:25:02.085 Virtualization Management: Not Supported 00:25:02.085 Doorbell Buffer Config: Not Supported 00:25:02.085 Get LBA Status Capability: Not Supported 00:25:02.085 Command & Feature Lockdown Capability: Not Supported 00:25:02.085 Abort Command Limit: 4 00:25:02.085 Async Event Request Limit: 4 00:25:02.085 Number of Firmware Slots: N/A 00:25:02.085 Firmware Slot 1 Read-Only: N/A 00:25:02.085 Firmware Activation Without Reset: N/A 00:25:02.085 Multiple Update Detection Support: N/A 00:25:02.085 Firmware Update Granularity: No Information Provided 00:25:02.085 Per-Namespace SMART Log: Yes 00:25:02.085 Asymmetric Namespace Access Log Page: Supported 00:25:02.085 ANA Transition Time : 10 sec 00:25:02.085 00:25:02.085 Asymmetric Namespace Access Capabilities 00:25:02.085 ANA Optimized State : Supported 00:25:02.085 ANA Non-Optimized State : Supported 00:25:02.085 ANA Inaccessible State : Supported 00:25:02.085 ANA Persistent Loss State : Supported 00:25:02.085 ANA Change State : Supported 00:25:02.085 ANAGRPID is not changed : No 00:25:02.085 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:02.085 00:25:02.085 ANA Group Identifier Maximum : 128 00:25:02.085 Number of ANA Group Identifiers : 128 00:25:02.085 Max Number of Allowed Namespaces : 1024 00:25:02.085 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:02.085 Command Effects Log Page: Supported 00:25:02.085 Get Log Page Extended Data: Supported 00:25:02.085 Telemetry Log Pages: Not Supported 00:25:02.085 Persistent Event Log Pages: Not Supported 00:25:02.085 Supported Log Pages Log Page: May Support 00:25:02.085 Commands Supported & Effects Log Page: Not Supported 00:25:02.085 Feature Identifiers & Effects Log Page:May Support 00:25:02.085 NVMe-MI Commands & Effects Log Page: May Support 00:25:02.085 Data Area 4 for Telemetry Log: Not Supported 00:25:02.085 Error Log Page Entries Supported: 128 00:25:02.085 Keep Alive: Supported 00:25:02.085 Keep Alive Granularity: 1000 ms 00:25:02.085 00:25:02.085 NVM Command Set Attributes 00:25:02.085 ========================== 00:25:02.085 Submission Queue Entry Size 00:25:02.085 Max: 64 00:25:02.085 Min: 64 00:25:02.085 Completion Queue Entry Size 00:25:02.085 Max: 16 00:25:02.085 Min: 16 00:25:02.085 Number of Namespaces: 1024 00:25:02.085 Compare Command: Not Supported 00:25:02.085 Write Uncorrectable Command: Not Supported 00:25:02.085 Dataset Management Command: Supported 
00:25:02.085 Write Zeroes Command: Supported 00:25:02.085 Set Features Save Field: Not Supported 00:25:02.085 Reservations: Not Supported 00:25:02.085 Timestamp: Not Supported 00:25:02.085 Copy: Not Supported 00:25:02.085 Volatile Write Cache: Present 00:25:02.085 Atomic Write Unit (Normal): 1 00:25:02.085 Atomic Write Unit (PFail): 1 00:25:02.085 Atomic Compare & Write Unit: 1 00:25:02.085 Fused Compare & Write: Not Supported 00:25:02.085 Scatter-Gather List 00:25:02.085 SGL Command Set: Supported 00:25:02.085 SGL Keyed: Not Supported 00:25:02.085 SGL Bit Bucket Descriptor: Not Supported 00:25:02.085 SGL Metadata Pointer: Not Supported 00:25:02.085 Oversized SGL: Not Supported 00:25:02.085 SGL Metadata Address: Not Supported 00:25:02.085 SGL Offset: Supported 00:25:02.085 Transport SGL Data Block: Not Supported 00:25:02.085 Replay Protected Memory Block: Not Supported 00:25:02.085 00:25:02.085 Firmware Slot Information 00:25:02.085 ========================= 00:25:02.085 Active slot: 0 00:25:02.085 00:25:02.085 Asymmetric Namespace Access 00:25:02.085 =========================== 00:25:02.085 Change Count : 0 00:25:02.085 Number of ANA Group Descriptors : 1 00:25:02.085 ANA Group Descriptor : 0 00:25:02.085 ANA Group ID : 1 00:25:02.085 Number of NSID Values : 1 00:25:02.085 Change Count : 0 00:25:02.085 ANA State : 1 00:25:02.085 Namespace Identifier : 1 00:25:02.085 00:25:02.085 Commands Supported and Effects 00:25:02.085 ============================== 00:25:02.085 Admin Commands 00:25:02.085 -------------- 00:25:02.085 Get Log Page (02h): Supported 00:25:02.085 Identify (06h): Supported 00:25:02.085 Abort (08h): Supported 00:25:02.085 Set Features (09h): Supported 00:25:02.085 Get Features (0Ah): Supported 00:25:02.085 Asynchronous Event Request (0Ch): Supported 00:25:02.085 Keep Alive (18h): Supported 00:25:02.085 I/O Commands 00:25:02.085 ------------ 00:25:02.085 Flush (00h): Supported 00:25:02.085 Write (01h): Supported LBA-Change 00:25:02.085 Read (02h): Supported 00:25:02.085 Write Zeroes (08h): Supported LBA-Change 00:25:02.085 Dataset Management (09h): Supported 00:25:02.085 00:25:02.085 Error Log 00:25:02.085 ========= 00:25:02.085 Entry: 0 00:25:02.085 Error Count: 0x3 00:25:02.085 Submission Queue Id: 0x0 00:25:02.085 Command Id: 0x5 00:25:02.085 Phase Bit: 0 00:25:02.085 Status Code: 0x2 00:25:02.085 Status Code Type: 0x0 00:25:02.085 Do Not Retry: 1 00:25:02.085 Error Location: 0x28 00:25:02.085 LBA: 0x0 00:25:02.085 Namespace: 0x0 00:25:02.085 Vendor Log Page: 0x0 00:25:02.085 ----------- 00:25:02.085 Entry: 1 00:25:02.085 Error Count: 0x2 00:25:02.086 Submission Queue Id: 0x0 00:25:02.086 Command Id: 0x5 00:25:02.086 Phase Bit: 0 00:25:02.086 Status Code: 0x2 00:25:02.086 Status Code Type: 0x0 00:25:02.086 Do Not Retry: 1 00:25:02.086 Error Location: 0x28 00:25:02.086 LBA: 0x0 00:25:02.086 Namespace: 0x0 00:25:02.086 Vendor Log Page: 0x0 00:25:02.086 ----------- 00:25:02.086 Entry: 2 00:25:02.086 Error Count: 0x1 00:25:02.086 Submission Queue Id: 0x0 00:25:02.086 Command Id: 0x4 00:25:02.086 Phase Bit: 0 00:25:02.086 Status Code: 0x2 00:25:02.086 Status Code Type: 0x0 00:25:02.086 Do Not Retry: 1 00:25:02.086 Error Location: 0x28 00:25:02.086 LBA: 0x0 00:25:02.086 Namespace: 0x0 00:25:02.086 Vendor Log Page: 0x0 00:25:02.086 00:25:02.086 Number of Queues 00:25:02.086 ================ 00:25:02.086 Number of I/O Submission Queues: 128 00:25:02.086 Number of I/O Completion Queues: 128 00:25:02.086 00:25:02.086 ZNS Specific Controller Data 00:25:02.086 
============================ 00:25:02.086 Zone Append Size Limit: 0 00:25:02.086 00:25:02.086 00:25:02.086 Active Namespaces 00:25:02.086 ================= 00:25:02.086 get_feature(0x05) failed 00:25:02.086 Namespace ID:1 00:25:02.086 Command Set Identifier: NVM (00h) 00:25:02.086 Deallocate: Supported 00:25:02.086 Deallocated/Unwritten Error: Not Supported 00:25:02.086 Deallocated Read Value: Unknown 00:25:02.086 Deallocate in Write Zeroes: Not Supported 00:25:02.086 Deallocated Guard Field: 0xFFFF 00:25:02.086 Flush: Supported 00:25:02.086 Reservation: Not Supported 00:25:02.086 Namespace Sharing Capabilities: Multiple Controllers 00:25:02.086 Size (in LBAs): 1953525168 (931GiB) 00:25:02.086 Capacity (in LBAs): 1953525168 (931GiB) 00:25:02.086 Utilization (in LBAs): 1953525168 (931GiB) 00:25:02.086 UUID: 8ec1a11b-5efd-4faa-84fc-da4f85cc06ee 00:25:02.086 Thin Provisioning: Not Supported 00:25:02.086 Per-NS Atomic Units: Yes 00:25:02.086 Atomic Boundary Size (Normal): 0 00:25:02.086 Atomic Boundary Size (PFail): 0 00:25:02.086 Atomic Boundary Offset: 0 00:25:02.086 NGUID/EUI64 Never Reused: No 00:25:02.086 ANA group ID: 1 00:25:02.086 Namespace Write Protected: No 00:25:02.086 Number of LBA Formats: 1 00:25:02.086 Current LBA Format: LBA Format #00 00:25:02.086 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:02.086 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:02.086 rmmod nvme_tcp 00:25:02.086 rmmod nvme_fabrics 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.086 22:02:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:04.618 
22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:04.618 22:02:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:07.150 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:07.150 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:07.718 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:07.718 00:25:07.718 real 0m15.593s 00:25:07.718 user 0m3.938s 00:25:07.718 sys 0m7.990s 00:25:07.718 22:03:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:07.718 22:03:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.718 ************************************ 00:25:07.718 END TEST nvmf_identify_kernel_target 00:25:07.718 ************************************ 00:25:07.978 22:03:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:07.978 22:03:01 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:07.978 22:03:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:07.978 22:03:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.978 22:03:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:07.978 ************************************ 00:25:07.978 START TEST nvmf_auth_host 00:25:07.978 ************************************ 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
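[editor's note] clean_kernel_target, traced above, is the mirror image of the setup: disable the namespace, unlink the subsystem from the port, remove the configfs nodes child-before-parent, and unload the target modules. As a sketch under the same assumptions about the attribute paths (the echo 0 target is likewise not visible in the trace and is taken to be the namespace enable file):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1"
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet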
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:07.978 * Looking for test storage... 00:25:07.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:07.978 22:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.247 
22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:13.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:13.247 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:13.247 Found net devices under 0000:86:00.0: 
cvl_0_0 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:13.247 Found net devices under 0000:86:00.1: cvl_0_1 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.247 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:13.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:25:13.248 00:25:13.248 --- 10.0.0.2 ping statistics --- 00:25:13.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.248 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:13.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:25:13.248 00:25:13.248 --- 10.0.0.1 ping statistics --- 00:25:13.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.248 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3823529 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3823529 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3823529 ']' 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
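Condensing the nvmf_tcp_init steps traced above: with two ice ports detected, cvl_0_0 is moved into a fresh namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), presumably relying on the two E810 ports being wired back to back, and the iptables rule plus the two pings prove the path before any NVMe/TCP traffic flows:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

Every target-side command from here on is prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD wrapping of the nvmf_tgt launch below does.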
00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.248 22:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.184 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.184 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:14.184 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:14.184 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.184 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.184 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.184 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bca58347a3caa5972aee58df836da300 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EcW 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bca58347a3caa5972aee58df836da300 0 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bca58347a3caa5972aee58df836da300 0 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bca58347a3caa5972aee58df836da300 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EcW 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EcW 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.EcW 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:14.185 
22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5445745c1b5fb7ae630f1af5e242c60b4e3bebf63a698f3abd9032341bbce2e2 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3vH 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5445745c1b5fb7ae630f1af5e242c60b4e3bebf63a698f3abd9032341bbce2e2 3 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5445745c1b5fb7ae630f1af5e242c60b4e3bebf63a698f3abd9032341bbce2e2 3 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5445745c1b5fb7ae630f1af5e242c60b4e3bebf63a698f3abd9032341bbce2e2 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3vH 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3vH 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3vH 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=867bfff7b619f6c92c64d8fb2a6ab6470b4eabc4caa86dfa 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UME 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 867bfff7b619f6c92c64d8fb2a6ab6470b4eabc4caa86dfa 0 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 867bfff7b619f6c92c64d8fb2a6ab6470b4eabc4caa86dfa 0 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=867bfff7b619f6c92c64d8fb2a6ab6470b4eabc4caa86dfa 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UME 00:25:14.185 22:03:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UME 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.UME 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a6a401ce4dee1d3f5ab2315d937bf288329418c295263d8e 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Vcl 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a6a401ce4dee1d3f5ab2315d937bf288329418c295263d8e 2 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a6a401ce4dee1d3f5ab2315d937bf288329418c295263d8e 2 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a6a401ce4dee1d3f5ab2315d937bf288329418c295263d8e 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Vcl 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Vcl 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Vcl 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:14.185 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=706200e12f2e631cf14827e934467b6c 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.F6G 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 706200e12f2e631cf14827e934467b6c 1 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 706200e12f2e631cf14827e934467b6c 1 
00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=706200e12f2e631cf14827e934467b6c 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.F6G 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.F6G 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.F6G 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:14.444 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=285721f7d35c5320d051b594655518fc 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.L8F 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 285721f7d35c5320d051b594655518fc 1 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 285721f7d35c5320d051b594655518fc 1 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=285721f7d35c5320d051b594655518fc 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.L8F 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.L8F 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.L8F 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=696356978b5533262865faacf5919340c7dd7961064402ee 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zpt 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 696356978b5533262865faacf5919340c7dd7961064402ee 2 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 696356978b5533262865faacf5919340c7dd7961064402ee 2 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=696356978b5533262865faacf5919340c7dd7961064402ee 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zpt 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zpt 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zpt 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4ddbf56a478830d6bff3e81ba9b7a1f2 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EP7 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4ddbf56a478830d6bff3e81ba9b7a1f2 0 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4ddbf56a478830d6bff3e81ba9b7a1f2 0 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4ddbf56a478830d6bff3e81ba9b7a1f2 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EP7 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EP7 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.EP7 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f8c9616272ecdc2e967bff362a70214532b9c4e4b6eefd78fab4b23b8105c5c7 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6Y0 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f8c9616272ecdc2e967bff362a70214532b9c4e4b6eefd78fab4b23b8105c5c7 3 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f8c9616272ecdc2e967bff362a70214532b9c4e4b6eefd78fab4b23b8105c5c7 3 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f8c9616272ecdc2e967bff362a70214532b9c4e4b6eefd78fab4b23b8105c5c7 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:14.445 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6Y0 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6Y0 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6Y0 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3823529 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3823529 ']' 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
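Each gen_dhchap_key call above follows the same pattern: len counts hex characters, so xxd -p -c0 -l $((len / 2)) /dev/urandom draws half that many random bytes (len=32 reads 16, len=48 reads 24, len=64 reads 32), and an inline python turns the hex string into the DHHC-1 secrets used below. xtrace does not expand the python body, so this sketch assumes the standard DHHC-1 secret representation, base64 of the secret with its CRC-32 appended little-endian, plus a hex digest index matching the digests map (null=0, sha256=1, sha384=2, sha512=3):

digest=0                                # index from the digests map, null here
key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex chars of key material
file=$(mktemp -t spdk.key-null.XXX)
python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed CRC placement
b64 = base64.b64encode(key + crc).decode()
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), b64))' "$key" "$digest" > "$file"
chmod 0600 "$file"

Under those assumptions the tail of each encoded key above carries the checksum, which is why the base64 strings run a few characters past the raw hex secret.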
00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EcW 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3vH ]] 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3vH 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.UME 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Vcl ]] 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vcl 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.F6G 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.L8F ]] 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L8F 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.704 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zpt 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.EP7 ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.EP7 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6Y0 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
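The five key/ckey pairs are then registered with the running nvmf_tgt as named keyring entries; keyid 4 has no controller key, which the empty-string test above skips. The loop driving those rpc_cmd calls condenses to:

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"   # controller (bidirectional) key
    fi
done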
00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:14.962 22:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:14.962 22:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:14.962 22:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:17.495 Waiting for block devices as requested 00:25:17.495 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:17.495 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:17.754 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:17.754 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:17.754 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:17.754 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:18.012 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:18.012 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:18.012 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:18.012 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:18.269 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:18.269 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:18.269 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:18.269 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:18.527 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:18.527 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:18.527 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:19.463 No valid GPT data, bailing 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:19.463 00:25:19.463 Discovery Log Number of Records 2, Generation counter 2 00:25:19.463 =====Discovery Log Entry 0====== 00:25:19.463 trtype: tcp 00:25:19.463 adrfam: ipv4 00:25:19.463 subtype: current discovery subsystem 00:25:19.463 treq: not specified, sq flow control disable supported 00:25:19.463 portid: 1 00:25:19.463 trsvcid: 4420 00:25:19.463 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:19.463 traddr: 10.0.0.1 00:25:19.463 eflags: none 00:25:19.463 sectype: none 00:25:19.463 =====Discovery Log Entry 1====== 00:25:19.463 trtype: tcp 00:25:19.463 adrfam: ipv4 00:25:19.463 subtype: nvme subsystem 00:25:19.463 treq: not specified, sq flow control disable supported 00:25:19.463 portid: 1 00:25:19.463 trsvcid: 4420 00:25:19.463 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:19.463 traddr: 10.0.0.1 00:25:19.463 eflags: none 00:25:19.463 sectype: none 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 
]] 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.463 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.464 nvme0n1 00:25:19.464 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.464 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.464 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.464 22:03:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.464 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.464 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.722 
22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.722 nvme0n1 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.722 22:03:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.722 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.723 22:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.981 22:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.981 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.981 22:03:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.981 nvme0n1 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
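The pass above repeats one fixed pattern per digest/dhgroup/keyid combination: re-key the kernel nvmet host entry, restrict the initiator's negotiation set to that single digest and dhgroup, then re-attach with the matching key pair. A condensed sketch of the keyid=1/ffdhe2048 iteration just completed is below; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are inferred from the kernel nvmet auth interface and are an assumption, since xtrace does not print the redirection targets of the echo calls, and $key1/$ckey1 stand in for the DHHC-1 strings shown in the trace.

  # target side: re-key the allowed host (attribute paths assumed, see note above)
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"        # host/auth.sh@48
  echo ffdhe2048 > "$host/dhchap_dhgroup"          # host/auth.sh@49
  echo "$key1" > "$host/dhchap_key"                # host/auth.sh@50, DHHC-1:00:... above
  echo "$ckey1" > "$host/dhchap_ctrl_key"          # host/auth.sh@51, DHHC-1:02:... above
  # initiator side: pin the negotiation set, then connect with key pair 1
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

Each iteration is then checked with rpc_cmd bdev_nvme_get_controllers piped through jq -r '.[].name' (expecting nvme0) and torn down with bdev_nvme_detach_controller nvme0 before the next combination runs, exactly as the surrounding trace shows.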
00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.981 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.982 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.240 nvme0n1 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:20.240 22:03:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.240 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.241 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.498 nvme0n1 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.498 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.499 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.757 nvme0n1 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.757 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.029 nvme0n1 00:25:21.029 22:03:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.029 nvme0n1 00:25:21.029 
22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.029 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.331 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.332 nvme0n1 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
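One detail worth calling out in these attach calls: host/auth.sh@58 builds the controller-key flag with bash's ${param:+word} expansion, so --dhchap-ctrlr-key is appended only when a controller key exists for that keyid; that is why the keyid=4 attach earlier in this pass carries --dhchap-key key4 alone, after the empty-ckey check at host/auth.sh@51. A minimal standalone illustration of the idiom follows (the array contents here are hypothetical):

  ckeys=(c0 c1 c2 c3 "")                           # keyid 4 has no controller key, as above
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-<flag omitted>}"
  done
  # keyid=1 -> --dhchap-ctrlr-key ckey1
  # keyid=4 -> <flag omitted>

Because the expansion is left unquoted in the array assignment, it splits into the flag and its argument as two array elements when the ckey is set, and into nothing at all when it is empty, which keeps the rpc_cmd invocation well-formed in both cases.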
00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.332 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.590 nvme0n1 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.590 
22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.590 22:03:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.590 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.848 nvme0n1 00:25:21.848 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.848 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.848 22:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.848 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.848 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.848 22:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.848 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:21.849 22:03:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.849 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.106 nvme0n1 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.106 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.107 22:03:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.107 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.364 nvme0n1 00:25:22.364 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.364 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.364 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.364 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.364 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.364 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.621 22:03:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.621 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.622 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.879 nvme0n1 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.879 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
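
The round that starts at keyid=3 above repeats the shape of the previous ones: nvmet_auth_set_key programs the digest, DH group and DHHC-1 key for the host on the target side, the initiator is restricted to the matching --dhchap-digests/--dhchap-dhgroups, the controller is attached with the named key pair, its presence is confirmed with bdev_nvme_get_controllers, and it is detached before the next combination. A minimal sketch of one such round follows, assuming the kernel nvmet configfs attribute names and an rpc.py path; key3/ckey3 must already be registered key names, and the DHHC-1 strings below are placeholders, not keys from this run.

  #!/usr/bin/env bash
  # One DH-HMAC-CHAP round: sha256 digest, ffdhe4096 DH group, key id 3.
  rpc=./scripts/rpc.py                          # assumed location of SPDK rpc.py
  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0

  # Target side: assumed nvmet configfs attribute names for fabrics auth.
  host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo 'hmac(sha256)'              > "$host_cfg/dhchap_hash"
  echo ffdhe4096                   > "$host_cfg/dhchap_dhgroup"
  echo 'DHHC-1:02:placeholder=='   > "$host_cfg/dhchap_key"       # host key
  echo 'DHHC-1:00:placeholder=='   > "$host_cfg/dhchap_ctrl_key"  # ctrlr key

  # Initiator side: allow only the combination under test, then attach with
  # the named keys (key3/ckey3 registered earlier in the script, not shown).
  "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key3 --dhchap-ctrlr-key ckey3

  # Verify the authenticated controller exists, then tear down for the next round.
  "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  "$rpc" bdev_nvme_detach_controller nvme0

Detaching after every combination keeps the controller list empty, so the "nvme0 == nvme0" check in the trace is meaningful for each digest/DH-group/key-id pairing on its own.
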
00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.880 22:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.138 nvme0n1 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.138 22:03:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.138 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.396 nvme0n1 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:23.396 22:03:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.396 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.962 nvme0n1 00:25:23.962 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.962 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.962 22:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.962 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.962 22:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.962 
22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:23.962 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.963 22:03:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.963 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.220 nvme0n1 00:25:24.220 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.220 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.220 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.220 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.220 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.220 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.478 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.736 nvme0n1 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.736 
22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.736 22:03:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.301 nvme0n1 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.302 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.560 nvme0n1 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.560 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:25.817 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.818 22:03:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.381 nvme0n1 00:25:26.381 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.381 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.381 22:03:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.381 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.381 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.381 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.381 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.381 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.381 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.382 22:03:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.947 nvme0n1 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.947 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.513 nvme0n1 00:25:27.513 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.513 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.513 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.513 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.513 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.513 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.513 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.513 
22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.513 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.513 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
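
The nvmf/common.sh lines that repeat before every attach are the get_main_ns_ip helper resolving which address the initiator should dial for the active transport. Reconstructed from the trace (a sketch; the return-code handling is inferred), it maps the transport to the name of an environment variable and then dereferences it:

  # get_main_ns_ip, as reconstructed from the xtrace above.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # value is a variable name
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_INITIATOR_IP ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                             # [[ -z 10.0.0.1 ]]
      echo "${!ip}"
  }

With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 in this run, every invocation prints 10.0.0.1, which becomes the -a argument to bdev_nvme_attach_controller.
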
00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.771 22:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.335 nvme0n1 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.335 
22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.335 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.899 nvme0n1 00:25:28.899 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.899 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.899 22:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.899 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.899 22:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:28.899 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.900 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.157 nvme0n1 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
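
At this point the trace has moved from sha256/ffdhe8192 to sha384/ffdhe2048, driven by the nested loops visible at host/auth.sh@100-103. A sketch of that driving loop, under the assumption that the digest and dhgroup arrays hold the usual full sets (this section of the log only shows sha256, sha384, and the ffdhe2048/3072/4096/8192 groups):

    # Every digest x dhgroup x keyid combination is programmed into the
    # target and then exercised end-to-end. Array contents are assumed.
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do   # keyids 0..4 in this run
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
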
00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.157 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.158 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.158 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.158 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.414 nvme0n1 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.414 nvme0n1 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.414 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.673 nvme0n1 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.673 22:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.932 nvme0n1 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.932 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.190 nvme0n1 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
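
The get_main_ns_ip helper traced repeatedly above (nvmf/common.sh@741-755) resolves the address passed to -a: it maps the transport to the *name* of an environment variable and then dereferences it, which is why the trace shows the literal NVMF_INITIATOR_IP before the final echo 10.0.0.1. A best-effort reconstruction; the variable names match the trace, but the exact guard expressions are inferred:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                  # trace: [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z $ip || -z ${!ip} ]] && return 1                 # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                         # indirect expansion
    }
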
00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.191 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.449 nvme0n1 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.449 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.707 nvme0n1 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.707 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.965 nvme0n1 00:25:30.965 22:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.965 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.223 nvme0n1 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.223 22:03:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.223 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.224 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.481 nvme0n1 00:25:31.481 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.481 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.481 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.481 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.481 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.481 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.481 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.481 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.482 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.739 nvme0n1 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.739 22:03:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:31.739 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.740 22:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.997 nvme0n1 00:25:31.997 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.997 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.997 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.997 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.997 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.997 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:32.255 22:03:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.255 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.513 nvme0n1 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:32.513 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.771 nvme0n1 00:25:32.771 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.771 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.771 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.772 22:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.029 nvme0n1 00:25:33.029 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.286 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.287 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.544 nvme0n1 00:25:33.544 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.544 22:03:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.544 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.544 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.544 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.544 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.544 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.544 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.544 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.544 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.802 22:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.060 nvme0n1 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.060 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.061 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.626 nvme0n1 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.626 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
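The nvmet_auth_set_key calls traced throughout this run (host/auth.sh@42-51) program the kernel nvmet target with the DH-HMAC-CHAP parameters for the current iteration: the digest as 'hmac(sha384)', the FFDHE group, the host key, and, when a controller key exists for the keyid, the bidirectional key. The trace shows only the echo payloads; the configfs destinations in the sketch below are an assumption based on the kernel nvmet interface, not something visible in this log. Values are taken from the ffdhe6144 keyid-0 pass traced above.

    # Hedged sketch of the target-side writes behind nvmet_auth_set_key.
    # The configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key,
    # dhchap_ctrl_key) and the host directory path are assumptions; only the
    # echoed values appear in the trace itself.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"     # digest for this pass
    echo ffdhe6144      > "$host_dir/dhchap_dhgroup"  # DH group under test
    echo "DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m:" > "$host_dir/dhchap_key"
    # the controller key is written only when a ckey is defined for this keyid
    echo "DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=:" > "$host_dir/dhchap_ctrl_key"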
00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.627 22:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.885 nvme0n1 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:34.885 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
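On the host side, connect_authenticate (host/auth.sh@55-61) drives the same iteration over SPDK's RPC interface: it first restricts the bdev_nvme module to the digest/dhgroup pair under test, resolves the target address through get_main_ns_ip (which, per the candidate table traced above, picks NVMF_INITIATOR_IP = 10.0.0.1 for tcp and NVMF_FIRST_TARGET_IP for rdma), then attaches with the matching key slot. Condensed below with the RPC names and flags exactly as traced; invoking them through scripts/rpc.py rather than the suite's rpc_cmd wrapper is an assumption, and key0/ckey0 name keys registered earlier in the test, outside this excerpt.

    # the two RPCs behind each connect_authenticate pass (ffdhe8192, keyid 0)
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0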
00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.143 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 nvme0n1 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.708 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.709 22:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.273 nvme0n1 00:25:36.273 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.273 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.273 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.273 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.273 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.273 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.273 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.273 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH:
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]]
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH:
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.274 22:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.837 nvme0n1
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
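On the other side of each iteration, nvmet_auth_set_key's echoes (auth.sh@48-51) push the same digest, DH group, and secrets at the kernel nvmet target that this host then authenticates against. A hedged sketch of where those echoes plausibly land; the configfs paths and attribute names follow the upstream nvmet ABI and are an assumption, not something this log shows:

  # hypothetical configfs entry for the allowed host
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest, as echoed at auth.sh@48
  echo ffdhe8192 > "$host/dhchap_dhgroup"        # DH group, as echoed at auth.sh@49
  # host secret (auth.sh@50) and bidirectional controller secret (auth.sh@51)
  echo 'DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:' > "$host/dhchap_key"
  echo 'DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH:' > "$host/dhchap_ctrl_key"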
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==:
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW:
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==:
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]]
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW:
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.837 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.714 nvme0n1
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=:
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=:
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:37.714 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:37.715 22:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.279 nvme0n1
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
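keyid 4 in the iteration above carries no controller key (ckey= is empty, hence the [[ -z '' ]] at auth.sh@51 and an attach without --dhchap-ctrlr-key): authentication is unidirectional for that key. The ckey=(...) array assignment at auth.sh@58 is what makes the flag optional; a self-contained demo of that ${var:+...} expansion:

  ckeys=([1]="ckey1-value" [4]="")
  for keyid in 1 4; do
      # expands to two extra arguments when the entry is non-empty, to nothing otherwise
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-no controller key}"
  done
  # keyid=1 -> --dhchap-ctrlr-key ckey1
  # keyid=4 -> no controller key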
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:38.279 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m:
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=:
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m:
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]]
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=:
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.280 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.536 nvme0n1
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.536 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==:
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==:
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==:
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]]
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==:
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.537 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.794 nvme0n1
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
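The secrets being cycled here are NVMe-oF configured secrets in DHHC-1 form: DHHC-1:<t>:<base64 payload>:, where <t> marks how the secret is transformed before use (00 = used as-is, 01/02/03 = transformed via HMAC-SHA-256/384/512) and the base64 payload carries a CRC-32 tail. The five key/ckey pairs above deliberately mix all four variants. One way to mint such a secret is nvme-cli's generator; command and flag spellings below are from nvme-cli's documentation, not from this log:

  nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn=nqn.2024-02.io.spdk:host0
  # prints something like DHHC-1:02:<base64>: suitable for the key=/ckey= slots above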
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH:
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH:
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.794 nvme0n1
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:38.794 22:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==:
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW:
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==:
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]]
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW:
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.794 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.051 nvme0n1
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=:
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=:
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.051 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.309 nvme0n1
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
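The @100/@101/@102 markers scattered through this section are the three levels of the sweep loop; the iteration beginning here is where dhgroup has just advanced from ffdhe2048 to ffdhe3072 within the sha512 pass. In sketch form; the array contents are inferred from the combinations this excerpt happens to show and may be incomplete:

  digests=(sha256 sha384 sha512)        # auth.sh@100; sha256 assumed, only sha384/sha512 appear here
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # auth.sh@101; middle entries assumed
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do          # keys indexed 0..4, auth.sh@102
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side
          done
      done
  done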
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m:
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=:
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m:
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]]
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=:
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.309 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.566 nvme0n1
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==:
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==:
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==:
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]]
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==:
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.566 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.824 nvme0n1
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH:
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
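Every rpc_cmd above is bracketed by the @559/@10 pair from autotest_common.sh: command tracing is suspended while the RPC helper runs, so the log records the intent without the helper's plumbing. The idiom, reduced to a simplified sketch (the real helpers also track nesting depth):

  xtrace_disable() {
      PREV_OPTS=$-      # remember whether -x was active
      set +x
  }
  xtrace_restore() {
      [[ $PREV_OPTS == *x* ]] && set -x
  }

  xtrace_disable
  rpc.py bdev_nvme_detach_controller nvme0   # runs without xtrace noise
  xtrace_restore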
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]]
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH:
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.824 22:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.081 nvme0n1
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==:
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW:
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==:
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]]
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW:
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.081 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.339 nvme0n1
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=:
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=:
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.339 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.617 nvme0n1
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:40.617 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key
sha512 ffdhe4096 0 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.618 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.875 nvme0n1 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:40.875 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.876 22:03:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.876 22:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.133 nvme0n1 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
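
On the target side, nvmet_auth_set_key installs the matching material into the kernel nvmet host entry: the echo calls at host/auth.sh@48-51 emit the digest, dhgroup, key, and (when present) controller key. The configfs destinations below are an assumption based on the upstream Linux nvmet authentication interface; the log shows only the values being echoed:

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Assumed location of the per-host DH-HMAC-CHAP attributes
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac(${digest})" > "$host/dhchap_hash"     # e.g. hmac(sha512)
      echo "$dhgroup"        > "$host/dhchap_dhgroup"  # e.g. ffdhe4096
      echo "${keys[keyid]}"  > "$host/dhchap_key"      # host secret, DHHC-1:...
      # Controller key only for bidirectional auth; keyid 4 carries none
      [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
  }
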
00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.133 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.134 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.391 nvme0n1 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.391 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.649 nvme0n1 00:25:41.649 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.906 22:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.164 nvme0n1 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
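
The DHHC-1 strings cycled through this run follow the NVMe DH-HMAC-CHAP secret representation: the second field names the transform applied to the secret (00 for an unhashed secret; 01, 02, 03 for SHA-256/384/512, paired here with 32-, 48- and 64-byte secrets), and the base64 body decodes to the secret followed by a 4-byte CRC-32. That layout can be checked against a key taken from this log (nvme-cli's gen-dhchap-key subcommand produces secrets in this format):

  # Split one of the :01: keys from the trace and measure its payload
  key='DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:'
  IFS=: read -r _ hmac b64 _ <<< "$key"
  echo "transform id: $hmac"            # 01 -> SHA-256
  echo -n "$b64" | base64 -d | wc -c    # 36 bytes = 32-byte secret + CRC-32
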
00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.164 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.729 nvme0n1 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]] 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
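
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58 is why keyid 4 attaches without a --dhchap-ctrlr-key flag: ${var:+word} expands to the extra arguments only when a controller key is set, and to nothing otherwise. The same idiom with illustrative values:

  ckeys=([1]="DHHC-1:02:example:" [4]="")   # keyid 4 has no controller key
  for keyid in 1 4; do
      args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#args[@]} extra arg(s): ${args[*]}"
  done
  # prints: keyid=1 -> 2 extra arg(s): --dhchap-ctrlr-key ckey1
  #         keyid=4 -> 0 extra arg(s):
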
00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.729 22:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.986 nvme0n1 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD: 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]] 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.987 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.243 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.500 nvme0n1 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==: 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]] 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.500 22:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.066 nvme0n1 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=: 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.066 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.323 nvme0n1 00:25:44.323 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.324 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.324 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.324 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.324 22:03:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.324 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNhNTgzNDdhM2NhYTU5NzJhZWU1OGRmODM2ZGEzMDBsDZ2m: 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: ]] 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQ0NTc0NWMxYjVmYjdhZTYzMGYxYWY1ZTI0MmM2MGI0ZTNiZWJmNjNhNjk4ZjNhYmQ5MDMyMzQxYmJjZTJlMlT+bP4=: 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.581 22:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.147 nvme0n1 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==: 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==:
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==:
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==:
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]]
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==:
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:45.147 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:45.711 nvme0n1
00:25:45.711 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:45.711 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
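get_main_ns_ip (nvmf/common.sh@741-755), traced in full on every connect above, just maps the transport to the right IP variable and dereferences it. A reconstruction grounded in the trace (only the TEST_TRANSPORT variable name is assumed):

  get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                      # '[[ -z tcp ]]' above
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                      # variable *name*, e.g. NVMF_INITIATOR_IP
    ip=${!ip}                                                 # indirect expansion -> 10.0.0.1 here
    [[ -z $ip ]] && return 1
    echo "$ip"
  }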
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH:
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzA2MjAwZTEyZjJlNjMxY2YxNDgyN2U5MzQ0NjdiNmO8efHD:
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH: ]]
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NzIxZjdkMzVjNTMyMGQwNTFiNTk0NjU1NTE4ZmO+o0eH:
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:45.712 22:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:46.276 nvme0n1
00:25:46.534 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==:
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW:
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk2MzU2OTc4YjU1MzMyNjI4NjVmYWFjZjU5MTkzNDBjN2RkNzk2MTA2NDQwMmVl2QgvQw==:
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW: ]]
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGRkYmY1NmE0Nzg4MzBkNmJmZjNlODFiYTliN2ExZjKITXgW:
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
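Every rpc_cmd line above funnels through SPDK's JSON-RPC client; the surrounding @559/@10 xtrace_disable/set +x pairs only silence tracing around the call. Roughly, as a sketch (assuming the default /var/tmp/spdk.sock socket; the real helper in common/autotest_common.sh is more elaborate):

  rpc_cmd() {   # simplified sketch, not the verbatim helper
    xtrace_disable
    local rc=0
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@" || rc=$?
    xtrace_restore
    return $rc
  }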
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:46.535 22:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.100 nvme0n1
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.100 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=:
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjhjOTYxNjI3MmVjZGMyZTk2N2JmZjM2MmE3MDIxNDUzMmI5YzRlNGI2ZWVmZDc4ZmFiNGIyM2I4MTA1YzVjN6vEnCo=:
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.101 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.668 nvme0n1
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==:
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==:
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODY3YmZmZjdiNjE5ZjZjOTJjNjRkOGZiMmE2YWI2NDcwYjRlYWJjNGNhYTg2ZGZhYoPEUQ==:
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==: ]]
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTZhNDAxY2U0ZGVlMWQzZjVhYjIzMTVkOTM3YmYyODgzMjk0MThjMjk1MjYzZDhllqaseA==:
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.669 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.928 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.929 request:
00:25:47.929 {
00:25:47.929 "name": "nvme0",
00:25:47.929 "trtype": "tcp",
00:25:47.929 "traddr": "10.0.0.1",
00:25:47.929 "adrfam": "ipv4",
00:25:47.929 "trsvcid": "4420",
00:25:47.929 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:25:47.929 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:25:47.929 "prchk_reftag": false,
00:25:47.929 "prchk_guard": false,
00:25:47.929 "hdgst": false,
00:25:47.929 "ddgst": false,
00:25:47.929 "method": "bdev_nvme_attach_controller",
00:25:47.929 "req_id": 1
00:25:47.929 }
00:25:47.929 Got JSON-RPC error response
00:25:47.929 response:
00:25:47.929 {
00:25:47.929 "code": -5,
00:25:47.929 "message": "Input/output error"
00:25:47.929 }
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.929 22:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.929 request:
00:25:47.929 {
00:25:47.929 "name": "nvme0",
00:25:47.929 "trtype": "tcp",
00:25:47.929 "traddr": "10.0.0.1",
00:25:47.929 "adrfam": "ipv4",
00:25:47.929 "trsvcid": "4420",
00:25:47.929 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:25:47.929 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:25:47.929 "prchk_reftag": false,
00:25:47.929 "prchk_guard": false,
00:25:47.929 "hdgst": false,
00:25:47.929 "ddgst": false,
00:25:47.929 "dhchap_key": "key2",
00:25:47.929 "method": "bdev_nvme_attach_controller",
00:25:47.929 "req_id": 1
00:25:47.929 }
00:25:47.929 Got JSON-RPC error response
00:25:47.929 response:
00:25:47.929 {
00:25:47.929 "code": -5,
00:25:47.929 "message": "Input/output error"
00:25:47.929 }
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
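The two Input/output error responses above are expected: host/auth.sh@112 and @117 attach without any key and then with the wrong key (key2), wrapping the call in NOT, which passes only when the wrapped command fails. The es bookkeeping traced at common/autotest_common.sh@648-675 boils down to this simplified sketch (the real helper also special-cases signal exits via the es > 128 check):

  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # NOT succeeds exactly when the wrapped command failed
  }
  # usage, as above: NOT rpc_cmd bdev_nvme_attach_controller ...   # no/wrong key -> must fail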
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:47.929 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:48.188 request:
00:25:48.188 {
00:25:48.188 "name": "nvme0",
00:25:48.188 "trtype": "tcp",
00:25:48.188 "traddr": "10.0.0.1",
00:25:48.188 "adrfam": "ipv4",
00:25:48.188 "trsvcid": "4420",
00:25:48.188 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:25:48.189 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:25:48.189 "prchk_reftag": false,
00:25:48.189 "prchk_guard": false,
00:25:48.189 "hdgst": false,
00:25:48.189 "ddgst": false,
00:25:48.189 "dhchap_key": "key1",
00:25:48.189 "dhchap_ctrlr_key": "ckey2",
00:25:48.189 "method": "bdev_nvme_attach_controller",
00:25:48.189 "req_id": 1
00:25:48.189 }
00:25:48.189 Got JSON-RPC error response
00:25:48.189 response:
00:25:48.189 {
00:25:48.189 "code": -5,
00:25:48.189 "message": "Input/output error"
00:25:48.189 }
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:48.189 rmmod nvme_tcp
00:25:48.189 rmmod nvme_fabrics
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3823529 ']'
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3823529
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3823529 ']'
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3823529
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3823529
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3823529'
00:25:48.189 killing process with pid 3823529
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3823529
00:25:48.189 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3823529
00:25:48.447 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:48.447 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:48.447 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:48.448 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:48.448 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:48.448 22:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:48.448 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:48.448 22:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:25:50.351 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:25:50.610 22:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:53.143 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:53.143 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:53.401 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
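The cleanup traced above tears the kernel target down strictly from the leaves up before unloading the modules. Condensed, clean_kernel_target (nvmf/common.sh@684-695) amounts to the following (the target of the bare 'echo 0' is an assumption; it plausibly disables the namespace before removal):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  echo 0 > "$subsys/namespaces/1/enable"   # assumed attribute for the 'echo 0' above
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet              # only once configfs is empty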
00:25:54.226 22:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.EcW /tmp/spdk.key-null.UME /tmp/spdk.key-sha256.F6G /tmp/spdk.key-sha384.zpt /tmp/spdk.key-sha512.6Y0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:25:54.226 22:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:56.801 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:25:56.801 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:25:56.801 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:25:56.801
00:25:56.801 real 0m48.777s
00:25:56.801 user 0m43.624s
00:25:56.801 sys 0m11.271s
00:25:56.801 22:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:56.801 22:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.801 ************************************
00:25:56.801 END TEST nvmf_auth_host
00:25:56.801 ************************************
00:25:56.801 22:03:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:56.801 22:03:50 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]]
00:25:56.801 22:03:50 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:25:56.801 22:03:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:56.801 22:03:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:56.801 22:03:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:56.801 ************************************
00:25:56.801 START TEST nvmf_digest
00:25:56.801 ************************************
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:25:56.801 * Looking for test storage...
00:25:56.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:56.801 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]]
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable
00:25:56.802 22:03:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=()
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=()
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=()
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=()
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=()
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=()
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=()
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:26:02.072 Found 0000:86:00.0 (0x8086 - 0x159b)
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:26:02.072 Found 0000:86:00.1 (0x8086 - 0x159b)
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:26:02.072 Found net devices under 0000:86:00.0: cvl_0_0
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:26:02.072 Found net devices under 0000:86:00.1: cvl_0_1
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:02.072 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:02.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:02.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms
00:26:02.073
00:26:02.073 --- 10.0.0.2 ping statistics ---
00:26:02.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:02.073 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:02.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:02.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:26:02.073 00:26:02.073 --- 10.0.0.1 ping statistics --- 00:26:02.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.073 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.073 22:03:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.073 ************************************ 00:26:02.073 START TEST nvmf_digest_clean 00:26:02.073 ************************************ 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3836969 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3836969 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3836969 ']' 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.073 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:02.073 [2024-07-15 22:03:56.059479] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:02.073 [2024-07-15 22:03:56.059517] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.073 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.073 [2024-07-15 22:03:56.116159] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.073 [2024-07-15 22:03:56.193222] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.073 [2024-07-15 22:03:56.193259] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.073 [2024-07-15 22:03:56.193266] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.073 [2024-07-15 22:03:56.193272] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.073 [2024-07-15 22:03:56.193277] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
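[editor's note] The target launch traced above reduces to the shell below. The binary path, namespace name, and flags are copied from the log; the polling loop is only an assumed stand-in for the harness's waitforlisten helper, not its actual implementation.

  # Start the NVMe-oF target inside the test namespace; -e 0xFFFF enables all
  # tracepoint groups and --wait-for-rpc holds initialization for RPC control.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Assumed waitforlisten equivalent: poll the RPC socket until the app answers.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done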
00:26:02.073 [2024-07-15 22:03:56.193293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.642 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.900 null0 00:26:02.900 [2024-07-15 22:03:56.961998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.900 [2024-07-15 22:03:56.986168] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3837092 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3837092 /var/tmp/bperf.sock 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3837092 ']' 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:02.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:02.900 22:03:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.900 [2024-07-15 22:03:57.037007] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:02.900 [2024-07-15 22:03:57.037054] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837092 ] 00:26:02.900 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.900 [2024-07-15 22:03:57.093334] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.159 [2024-07-15 22:03:57.172972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.725 22:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.725 22:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:03.725 22:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:03.725 22:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:03.725 22:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:03.983 22:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.983 22:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.241 nvme0n1 00:26:04.241 22:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:04.241 22:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:04.499 Running I/O for 2 seconds... 
00:26:06.402 00:26:06.402 Latency(us) 00:26:06.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.402 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:06.402 nvme0n1 : 2.00 26760.84 104.53 0.00 0.00 4778.02 2464.72 17438.27 00:26:06.402 =================================================================================================================== 00:26:06.402 Total : 26760.84 104.53 0.00 0.00 4778.02 2464.72 17438.27 00:26:06.402 0 00:26:06.402 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:06.402 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:06.402 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:06.402 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:06.402 | select(.opcode=="crc32c") 00:26:06.402 | "\(.module_name) \(.executed)"' 00:26:06.402 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3837092 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3837092 ']' 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3837092 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3837092 00:26:06.660 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:06.661 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:06.661 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3837092' 00:26:06.661 killing process with pid 3837092 00:26:06.661 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3837092 00:26:06.661 Received shutdown signal, test time was about 2.000000 seconds 00:26:06.661 00:26:06.661 Latency(us) 00:26:06.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.661 =================================================================================================================== 00:26:06.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.661 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3837092 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:06.919 22:04:00 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3837689 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3837689 /var/tmp/bperf.sock 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3837689 ']' 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:06.919 22:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:06.919 [2024-07-15 22:04:01.009213] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:06.919 [2024-07-15 22:04:01.009266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837689 ] 00:26:06.919 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:06.919 Zero copy mechanism will not be used. 
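[editor's note] Every run_bperf pass in this suite follows the same four-step shape. Everything in this sketch is copied from commands visible in the trace, shown here with the 128 KiB / QD16 randread arguments of the pass starting above:

  # 1. start bdevperf on core mask 0x2, held at --wait-for-rpc
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  bperfpid=$!
  rpc_bperf='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  # 2. release initialization
  $rpc_bperf framework_start_init
  # 3. attach the remote namespace; --ddgst enables the NVMe/TCP data digest,
  #    which is what generates the crc32c work this test measures
  $rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. run the timed workload
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests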
00:26:06.919 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.919 [2024-07-15 22:04:01.064557] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.919 [2024-07-15 22:04:01.143779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.856 22:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:07.856 22:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:07.856 22:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:07.856 22:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:07.856 22:04:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:07.856 22:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.856 22:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.114 nvme0n1 00:26:08.114 22:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:08.114 22:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:08.373 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:08.373 Zero copy mechanism will not be used. 00:26:08.373 Running I/O for 2 seconds... 
00:26:10.276 00:26:10.276 Latency(us) 00:26:10.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.276 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:10.276 nvme0n1 : 2.00 4837.25 604.66 0.00 0.00 3305.04 1018.66 6838.54 00:26:10.276 =================================================================================================================== 00:26:10.276 Total : 4837.25 604.66 0.00 0.00 3305.04 1018.66 6838.54 00:26:10.276 0 00:26:10.276 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:10.276 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:10.276 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:10.276 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:10.276 | select(.opcode=="crc32c") 00:26:10.276 | "\(.module_name) \(.executed)"' 00:26:10.276 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3837689 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3837689 ']' 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3837689 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3837689 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3837689' 00:26:10.537 killing process with pid 3837689 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3837689 00:26:10.537 Received shutdown signal, test time was about 2.000000 seconds 00:26:10.537 00:26:10.537 Latency(us) 00:26:10.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.537 =================================================================================================================== 00:26:10.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:10.537 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3837689 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:10.796 22:04:04 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3838386 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3838386 /var/tmp/bperf.sock 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3838386 ']' 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:10.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:10.796 22:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:10.796 [2024-07-15 22:04:04.898234] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
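[editor's note] After each two-second run, the get_accel_stats step quoted in the traces above decides whether the digests were computed by the expected module. The exact pipeline from the log:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

With scan_dsa=false the expected module is software, so each pass succeeds only when the first field is "software" and the executed count is greater than zero, which is what the @94-@96 checks assert.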
00:26:10.796 [2024-07-15 22:04:04.898282] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838386 ] 00:26:10.796 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.796 [2024-07-15 22:04:04.953854] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.796 [2024-07-15 22:04:05.033396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.731 22:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:11.731 22:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:11.731 22:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:11.731 22:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:11.731 22:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:11.731 22:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.731 22:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.298 nvme0n1 00:26:12.298 22:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:12.298 22:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:12.298 Running I/O for 2 seconds... 
00:26:14.204 00:26:14.204 Latency(us) 00:26:14.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.204 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:14.204 nvme0n1 : 2.00 27020.80 105.55 0.00 0.00 4728.88 2991.86 7807.33 00:26:14.204 =================================================================================================================== 00:26:14.204 Total : 27020.80 105.55 0.00 0.00 4728.88 2991.86 7807.33 00:26:14.204 0 00:26:14.204 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:14.204 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:14.204 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:14.204 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:14.204 | select(.opcode=="crc32c") 00:26:14.204 | "\(.module_name) \(.executed)"' 00:26:14.204 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3838386 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3838386 ']' 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3838386 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3838386 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3838386' 00:26:14.463 killing process with pid 3838386 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3838386 00:26:14.463 Received shutdown signal, test time was about 2.000000 seconds 00:26:14.463 00:26:14.463 Latency(us) 00:26:14.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.463 =================================================================================================================== 00:26:14.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.463 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3838386 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:14.722 22:04:08 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3839085 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3839085 /var/tmp/bperf.sock 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3839085 ']' 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:14.722 22:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.722 [2024-07-15 22:04:08.878010] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:14.722 [2024-07-15 22:04:08.878058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839085 ] 00:26:14.722 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:14.722 Zero copy mechanism will not be used. 
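[editor's note] The killprocess calls that close out each pass above follow a consistent pattern in the trace: check the pid is alive with kill -0, verify its command name with ps, then kill and wait. This is a condensation reconstructed from the trace fragments, not the actual autotest_common.sh helper:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                  # still running?
      # the trace also inspects the pid's comm (e.g. reactor_1) and refuses
      # to signal processes running as sudo before proceeding
      ps --no-headers -o comm= "$pid"
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }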
00:26:14.722 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.722 [2024-07-15 22:04:08.932940] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.980 [2024-07-15 22:04:09.012583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.547 22:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:15.547 22:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:15.547 22:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:15.547 22:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:15.547 22:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.806 22:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.806 22:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:16.374 nvme0n1 00:26:16.374 22:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:16.374 22:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:16.374 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:16.374 Zero copy mechanism will not be used. 00:26:16.374 Running I/O for 2 seconds... 
00:26:18.275 00:26:18.275 Latency(us) 00:26:18.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.275 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:18.275 nvme0n1 : 2.00 6117.84 764.73 0.00 0.00 2611.14 1766.62 17096.35 00:26:18.275 =================================================================================================================== 00:26:18.275 Total : 6117.84 764.73 0.00 0.00 2611.14 1766.62 17096.35 00:26:18.275 0 00:26:18.276 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:18.276 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:18.276 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:18.276 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:18.276 | select(.opcode=="crc32c") 00:26:18.276 | "\(.module_name) \(.executed)"' 00:26:18.276 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3839085 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3839085 ']' 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3839085 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3839085 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3839085' 00:26:18.535 killing process with pid 3839085 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3839085 00:26:18.535 Received shutdown signal, test time was about 2.000000 seconds 00:26:18.535 00:26:18.535 Latency(us) 00:26:18.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.535 =================================================================================================================== 00:26:18.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.535 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3839085 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3836969 00:26:18.796 22:04:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3836969 ']' 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3836969 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3836969 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3836969' 00:26:18.796 killing process with pid 3836969 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3836969 00:26:18.796 22:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3836969 00:26:19.101 00:26:19.101 real 0m17.085s 00:26:19.101 user 0m32.873s 00:26:19.101 sys 0m4.359s 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:19.101 ************************************ 00:26:19.101 END TEST nvmf_digest_clean 00:26:19.101 ************************************ 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:19.101 ************************************ 00:26:19.101 START TEST nvmf_digest_error 00:26:19.101 ************************************ 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3839804 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3839804 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3839804 ']' 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.101 22:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:19.101 [2024-07-15 22:04:13.213051] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:19.101 [2024-07-15 22:04:13.213093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.101 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.101 [2024-07-15 22:04:13.272542] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.359 [2024-07-15 22:04:13.343533] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.359 [2024-07-15 22:04:13.343573] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.359 [2024-07-15 22:04:13.343580] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.359 [2024-07-15 22:04:13.343586] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.359 [2024-07-15 22:04:13.343591] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
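[editor's note] The nvmf_digest_error variant starting here uses the same target bring-up as the clean pass, but reroutes the target's crc32c work through the accel "error" module so digest failures can be forced at will. The two RPCs involved appear verbatim further down in the trace; the framework_start_init ordering is an assumption based on the --wait-for-rpc launch above:

  rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $rpc accel_assign_opc -o crc32c -m error       # route crc32c through the error module
  $rpc framework_start_init                      # assumed to follow, as in the clean pass
  # per test case: leave digests intact ...
  $rpc accel_error_inject_error -o crc32c -t disable
  # ... or corrupt the next 256 crc32c operations
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

The corrupted digests then surface on the initiator side as the "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR completions that fill the remainder of the log below.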
00:26:19.359 [2024-07-15 22:04:13.343609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:19.924 [2024-07-15 22:04:14.053671] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.924 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:19.924 null0 00:26:19.924 [2024-07-15 22:04:14.143032] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.183 [2024-07-15 22:04:14.167219] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3839953 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3839953 /var/tmp/bperf.sock 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3839953 ']' 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:20.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:20.183 22:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.183 [2024-07-15 22:04:14.215470] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:20.183 [2024-07-15 22:04:14.215513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839953 ] 00:26:20.183 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.183 [2024-07-15 22:04:14.270708] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.183 [2024-07-15 22:04:14.344582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.119 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.377 nvme0n1 00:26:21.377 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:21.377 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.377 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.377 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.377 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:21.377 22:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.377 Running I/O for 2 seconds... 00:26:21.377 [2024-07-15 22:04:15.612179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.377 [2024-07-15 22:04:15.612213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.377 [2024-07-15 22:04:15.612223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.637 [2024-07-15 22:04:15.621805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.637 [2024-07-15 22:04:15.621832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.637 [2024-07-15 22:04:15.621841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.637 [2024-07-15 22:04:15.630980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.637 [2024-07-15 22:04:15.631004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.637 [2024-07-15 22:04:15.631012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.637 [2024-07-15 22:04:15.640535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.637 [2024-07-15 22:04:15.640557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.637 [2024-07-15 22:04:15.640565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.637 [2024-07-15 22:04:15.650260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.637 [2024-07-15 22:04:15.650281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.637 [2024-07-15 22:04:15.650290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.637 [2024-07-15 22:04:15.660006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.637 [2024-07-15 22:04:15.660027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.637 [2024-07-15 22:04:15.660035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.637 [2024-07-15 22:04:15.669706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.637 [2024-07-15 22:04:15.669727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15097 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.637 [2024-07-15 22:04:15.669735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.637 [2024-07-15 22:04:15.678983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.637 [2024-07-15 22:04:15.679004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.637 [2024-07-15 22:04:15.679012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.637 [2024-07-15 22:04:15.688012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.637 [2024-07-15 22:04:15.688033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.637 [2024-07-15 22:04:15.688041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.637 [2024-07-15 22:04:15.697147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.637 [2024-07-15 22:04:15.697168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.697180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.707391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.707412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.707420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.715600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.715622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.715630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.725381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.725402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.725411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.735817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.735838] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.735846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.745456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.745477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.745497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.754333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.754354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.754362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.764855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.764877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.764885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.773540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.773561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.773570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.784565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.784588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.784598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.796112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.796134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.796144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.804839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.804861] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.804869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.816366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.816387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.816395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.826731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.826752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.826760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.838844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.838865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.838873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.848113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.848133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.848142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.856896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.856916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.856924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.638 [2024-07-15 22:04:15.867515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.638 [2024-07-15 22:04:15.867535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.638 [2024-07-15 22:04:15.867547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.878392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.878416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.878426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.887674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.887695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.887703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.896756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.896777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.896784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.905937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.905958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.905966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.916666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.916687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.916695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.925089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.925111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.925119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.935361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.935382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.935390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.944565] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.944586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.944594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.954633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.954656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.954665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.962541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.962561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.962569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.973515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.973536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.973544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.983874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.983895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.983903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:15.992662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:15.992682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:15.992690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:16.002584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:16.002604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:16.002612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:21.898 [2024-07-15 22:04:16.012588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:16.012608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:16.012616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:16.021972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:16.021993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:16.022001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:16.030897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:16.030918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:16.030926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:16.040925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:16.040945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:16.040953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:16.049779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:16.049800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:16.049807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:16.060282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:16.060302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:16.060310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:16.069587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:16.069607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.898 [2024-07-15 22:04:16.069616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.898 [2024-07-15 22:04:16.078892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.898 [2024-07-15 22:04:16.078913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.899 [2024-07-15 22:04:16.078921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.899 [2024-07-15 22:04:16.088768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.899 [2024-07-15 22:04:16.088788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.899 [2024-07-15 22:04:16.088796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.899 [2024-07-15 22:04:16.098842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.899 [2024-07-15 22:04:16.098863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.899 [2024-07-15 22:04:16.098871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.899 [2024-07-15 22:04:16.107427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.899 [2024-07-15 22:04:16.107447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.899 [2024-07-15 22:04:16.107455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.899 [2024-07-15 22:04:16.117273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.899 [2024-07-15 22:04:16.117294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.899 [2024-07-15 22:04:16.117305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.899 [2024-07-15 22:04:16.127398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.899 [2024-07-15 22:04:16.127419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.899 [2024-07-15 22:04:16.127427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.899 [2024-07-15 22:04:16.136851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:21.899 [2024-07-15 22:04:16.136873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.899 [2024-07-15 22:04:16.136881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.157 [2024-07-15 22:04:16.145657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.157 [2024-07-15 22:04:16.145678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.157 [2024-07-15 22:04:16.145686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.157 [2024-07-15 22:04:16.156079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.157 [2024-07-15 22:04:16.156100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.157 [2024-07-15 22:04:16.156109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.157 [2024-07-15 22:04:16.165030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.157 [2024-07-15 22:04:16.165050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.157 [2024-07-15 22:04:16.165058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.175053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.175074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.175082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.184998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.185019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.185027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.194425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.194445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.194453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.204761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.204785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:22.158 [2024-07-15 22:04:16.204793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.214025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.214045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.214053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.222594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.222615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.222623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.233021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.233043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.233050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.241929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.241950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.241958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.251708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.251728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.251737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.261272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.261294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.261303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.269779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.269800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:7955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.269808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.279797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.279817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.279829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.289410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.289433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.289441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.297970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.297991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.297999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.309452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.309472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.309480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.318610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.318631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.318640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.328006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.328026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.328034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.338427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.338448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.338456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.346981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.347002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.347010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.356450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.356471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.356479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.366381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.366404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.366413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.376704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.376725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.376733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.386795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.386817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.386825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.158 [2024-07-15 22:04:16.397377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.158 [2024-07-15 22:04:16.397398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.158 [2024-07-15 22:04:16.397406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.417 [2024-07-15 22:04:16.406021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 
00:26:22.418 [2024-07-15 22:04:16.406042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.406051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.415897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.415916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.415925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.425841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.425861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.425870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.435924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.435945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.435953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.445265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.445285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.445293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.454637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.454658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.454666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.463698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.463719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.463726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.472340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.472360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.472369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.482033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.482052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.482060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.493302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.493322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.493330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.502570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.502591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.502599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.510668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.510688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.510695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.520901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.520922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.520931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.530395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.530415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.530427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.540318] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.540340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.540350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.549548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.549569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.549577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.560115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.560135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.560143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.568802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.568823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.568831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.578134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.578154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.578162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.588836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.588856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.588864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.599253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.599273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.599281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:22.418 [2024-07-15 22:04:16.610138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.610158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.610166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.619484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.619508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.619516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.630050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.630070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.630078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.640860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.640881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.640889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.418 [2024-07-15 22:04:16.649361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.418 [2024-07-15 22:04:16.649382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.418 [2024-07-15 22:04:16.649390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.658841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.658862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.658872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.669365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.669385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.669393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.678943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.678964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.678972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.689888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.689908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.689916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.698606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.698627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.698636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.708803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.708823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.708831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.717857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.717877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.717885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.727714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.727736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.727743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.737282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.737304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.737312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.747794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.747814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.747822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.756320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.756340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.756348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.766440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.766460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.766468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.775988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.776008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.776016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.786235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.786255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.786266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.795810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.795831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.795840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.806004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.806026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:22.679 [2024-07-15 22:04:16.806034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.814848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.814869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.814878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.825663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.825685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.825693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.834839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.834861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.834869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.844697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.679 [2024-07-15 22:04:16.844719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.679 [2024-07-15 22:04:16.844727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.679 [2024-07-15 22:04:16.855098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.680 [2024-07-15 22:04:16.855121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.680 [2024-07-15 22:04:16.855129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.680 [2024-07-15 22:04:16.865852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.680 [2024-07-15 22:04:16.865873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.680 [2024-07-15 22:04:16.865882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.680 [2024-07-15 22:04:16.875341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.680 [2024-07-15 22:04:16.875361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:7907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.680 [2024-07-15 22:04:16.875369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.680 [2024-07-15 22:04:16.884841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.680 [2024-07-15 22:04:16.884863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.680 [2024-07-15 22:04:16.884870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.680 [2024-07-15 22:04:16.894135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.680 [2024-07-15 22:04:16.894156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.680 [2024-07-15 22:04:16.894164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.680 [2024-07-15 22:04:16.904632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.680 [2024-07-15 22:04:16.904654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.680 [2024-07-15 22:04:16.904662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.680 [2024-07-15 22:04:16.915384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.680 [2024-07-15 22:04:16.915405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.680 [2024-07-15 22:04:16.915413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:16.924807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:16.924828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:16.924837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:16.935105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:16.935127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:16.935136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:16.945575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:16.945596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:16.945605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:16.955798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:16.955820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:16.955832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:16.964266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:16.964287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:16.964295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:16.974895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:16.974915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:16.974923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:16.983637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:16.983658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:16.983667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:16.994283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:16.994304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:16.994312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.004087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.004108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.004116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.013688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 
00:26:22.940 [2024-07-15 22:04:17.013709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.013717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.022987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.023008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.023016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.032746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.032768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.032776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.042167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.042191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.042199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.050610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.050631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.050639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.060676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.060696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.060704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.069506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.069526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.069534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.080148] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.080171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.080179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.089177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.089197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.089206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.940 [2024-07-15 22:04:17.098445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.940 [2024-07-15 22:04:17.098466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.940 [2024-07-15 22:04:17.098474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.941 [2024-07-15 22:04:17.108289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.941 [2024-07-15 22:04:17.108310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.941 [2024-07-15 22:04:17.108318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.941 [2024-07-15 22:04:17.117472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.941 [2024-07-15 22:04:17.117493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.941 [2024-07-15 22:04:17.117501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.941 [2024-07-15 22:04:17.127443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.941 [2024-07-15 22:04:17.127465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.941 [2024-07-15 22:04:17.127473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.941 [2024-07-15 22:04:17.136349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.941 [2024-07-15 22:04:17.136370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.941 [2024-07-15 22:04:17.136378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:22.941 [2024-07-15 22:04:17.145957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.941 [2024-07-15 22:04:17.145978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.941 [2024-07-15 22:04:17.145986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.941 [2024-07-15 22:04:17.156014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.941 [2024-07-15 22:04:17.156035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.941 [2024-07-15 22:04:17.156043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.941 [2024-07-15 22:04:17.164644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.941 [2024-07-15 22:04:17.164665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.941 [2024-07-15 22:04:17.164674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.941 [2024-07-15 22:04:17.175269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:22.941 [2024-07-15 22:04:17.175290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.941 [2024-07-15 22:04:17.175298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.184832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.184854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.184864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.194121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.194142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.194150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.202950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.202972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.202983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.213801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.213823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.213832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.223148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.223169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.223177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.232632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.232653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.232661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.243346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.243366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.243374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.251764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.251784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.251792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.261047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.261068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.261075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.271950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.271971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.271980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.280876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.280896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.280904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.290076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.290096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.290104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.299240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.299260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.299268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.309835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.309858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.309867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.319239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.319260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.319268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.329399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.329420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.329428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.339216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.339242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:23.200 [2024-07-15 22:04:17.339250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.348736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.348757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.348766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.359179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.359201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.359210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.367834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.367855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.367867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.378232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.378252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.378261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.387783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.387804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.387812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.396469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.200 [2024-07-15 22:04:17.396489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.200 [2024-07-15 22:04:17.396497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.200 [2024-07-15 22:04:17.406939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.201 [2024-07-15 22:04:17.406959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:11775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.201 [2024-07-15 22:04:17.406967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.201 [2024-07-15 22:04:17.415162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.201 [2024-07-15 22:04:17.415183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.201 [2024-07-15 22:04:17.415191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.201 [2024-07-15 22:04:17.425538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.201 [2024-07-15 22:04:17.425559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.201 [2024-07-15 22:04:17.425568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.201 [2024-07-15 22:04:17.434884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.201 [2024-07-15 22:04:17.434905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.201 [2024-07-15 22:04:17.434912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.459 [2024-07-15 22:04:17.445400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.459 [2024-07-15 22:04:17.445422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.459 [2024-07-15 22:04:17.445431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.459 [2024-07-15 22:04:17.455395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.459 [2024-07-15 22:04:17.455418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.459 [2024-07-15 22:04:17.455426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.459 [2024-07-15 22:04:17.465594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.459 [2024-07-15 22:04:17.465614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.459 [2024-07-15 22:04:17.465622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.459 [2024-07-15 22:04:17.473992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.459 [2024-07-15 22:04:17.474013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.459 [2024-07-15 22:04:17.474020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.459 [2024-07-15 22:04:17.484578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.459 [2024-07-15 22:04:17.484599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.459 [2024-07-15 22:04:17.484607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.459 [2024-07-15 22:04:17.494163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.459 [2024-07-15 22:04:17.494183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.459 [2024-07-15 22:04:17.494192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.459 [2024-07-15 22:04:17.502674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.459 [2024-07-15 22:04:17.502694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.459 [2024-07-15 22:04:17.502703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.459 [2024-07-15 22:04:17.514044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.459 [2024-07-15 22:04:17.514065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.459 [2024-07-15 22:04:17.514073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.459 [2024-07-15 22:04:17.524487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.459 [2024-07-15 22:04:17.524508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.460 [2024-07-15 22:04:17.524516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.460 [2024-07-15 22:04:17.532941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 00:26:23.460 [2024-07-15 22:04:17.532961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.460 [2024-07-15 22:04:17.532969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.460 [2024-07-15 22:04:17.544277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130) 
00:26:23.460 [2024-07-15 22:04:17.544299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.460 [2024-07-15 22:04:17.544307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.460 [2024-07-15 22:04:17.553608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130)
00:26:23.460 [2024-07-15 22:04:17.553630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.460 [2024-07-15 22:04:17.553640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.460 [2024-07-15 22:04:17.562946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130)
00:26:23.460 [2024-07-15 22:04:17.562968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.460 [2024-07-15 22:04:17.562976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.460 [2024-07-15 22:04:17.573453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130)
00:26:23.460 [2024-07-15 22:04:17.573473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.460 [2024-07-15 22:04:17.573482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.460 [2024-07-15 22:04:17.582565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130)
00:26:23.460 [2024-07-15 22:04:17.582586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.460 [2024-07-15 22:04:17.582594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.460 [2024-07-15 22:04:17.592981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9e130)
00:26:23.460 [2024-07-15 22:04:17.593001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.460 [2024-07-15 22:04:17.593009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.460
00:26:23.460                                                            Latency(us)
00:26:23.460 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:23.460 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:23.460          nvme0n1             :       2.00   26200.79     102.35       0.00     0.00    4879.70    1852.10   14019.01
00:26:23.460 ===================================================================================================================
00:26:23.460 Total                                                                       :             26200.79     102.35       0.00     0.00    4879.70    1852.10   14019.01
00:26:23.460 0
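The value asserted a few lines below, (( 205 > 0 )), is the transient-error count read back from bdevperf over its RPC socket: each injected digest error above ends up incrementing the controller's command_transient_transport_error counter. As a standalone sketch of the same query (it reuses exactly the rpc.py call and jq filter the trace below echoes, and assumes the controller was created with bdev_nvme_set_options --nvme-error-stat so the per-status-code counters exist):

  # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22)
  # for the nvme0n1 bdev attached to the bperf instance.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error'

This is what ties the accel-layer crc32c corruption to a single number the shell test can assert on.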
00:26:23.460 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:23.460 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:23.460 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:23.460 | .driver_specific
00:26:23.460 | .nvme_error
00:26:23.460 | .status_code
00:26:23.460 | .command_transient_transport_error'
00:26:23.460 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 ))
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3839953
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3839953 ']'
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3839953
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3839953
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3839953'
00:26:23.717 killing process with pid 3839953
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3839953
00:26:23.717 Received shutdown signal, test time was about 2.000000 seconds
00:26:23.717
00:26:23.717                                                            Latency(us)
00:26:23.717 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:23.717 ===================================================================================================================
00:26:23.717 Total                                                                       :       0.00       0.00       0.00       0.00     0.00       0.00       0.00
00:26:23.717 22:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3839953
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3840537
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3840537 /var/tmp/bperf.sock
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3840537 ']'
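Having torn down the first run, the harness starts a fresh bdevperf (pid 3840537) and waits for its RPC socket before configuring it. A rough standalone equivalent of that launch-and-wait step, with a simple rpc_get_methods poll standing in for the harness's waitforlisten helper (which does a more careful version of this loop, with a retry budget of 100), would be:

  # Launch bdevperf on core 1 (core mask 0x2) with a private RPC socket,
  # doing 128 KiB random reads at queue depth 16; -z makes it idle until
  # a perform_tests RPC arrives.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Poll until the app answers RPCs on the socket.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

The -z flag matters here: the app comes up with no job running, so the digest and error-injection knobs can be set over RPC before any I/O is issued.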
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:23.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:23.975 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:23.975 [2024-07-15 22:04:18.068175] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:26:23.975 [2024-07-15 22:04:18.068223] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840537 ]
00:26:23.975 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:23.975 Zero copy mechanism will not be used.
00:26:23.975 EAL: No free 2048 kB hugepages reported on node 1
00:26:23.975 [2024-07-15 22:04:18.122676] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:23.975 [2024-07-15 22:04:18.202148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:24.910 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:24.910 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:24.910 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:24.910 22:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:24.910 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:24.910 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.910 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:24.910 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.910 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:24.910 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:25.169 nvme0n1
00:26:25.451 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:25.451 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.451 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
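Those three RPCs are the whole error-injection setup for this run: NVMe error counters on, a TCP controller attached with data digest enabled, then the accel layer told to corrupt every 32nd crc32c operation. Condensed into a hand-runnable sketch using the same socket and target addressing as the trace above:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Keep per-status-code NVMe error counters (these feed the
  # command_transient_transport_error value queried later) and retry failed
  # I/O indefinitely, so injected errors never fail the workload outright.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the target with data digest (--ddgst) enabled: every read payload
  # is CRC32C-verified by the initiator.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd crc32c computation in the accel layer; each hit
  # surfaces as one "data digest error" / TRANSIENT TRANSPORT ERROR pair below.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

The perform_tests call issued right after this (via bdevperf.py against the same socket) is what kicks off the timed 2-second run whose error records follow.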
00:26:25.451 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.451 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:25.451 22:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:25.451 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:25.451 Zero copy mechanism will not be used.
00:26:25.451 Running I/O for 2 seconds...
00:26:25.451 [2024-07-15 22:04:19.528588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120)
00:26:25.451 [2024-07-15 22:04:19.528621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.451 [2024-07-15 22:04:19.528631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.451 [2024-07-15 22:04:19.538989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120)
00:26:25.451 [2024-07-15 22:04:19.539016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.451 [2024-07-15 22:04:19.539025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.451 [2024-07-15 22:04:19.549020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120)
00:26:25.451 [2024-07-15 22:04:19.549044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.451 [2024-07-15 22:04:19.549052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:25.451 [2024-07-15 22:04:19.557790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120)
00:26:25.451 [2024-07-15 22:04:19.557819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.451 [2024-07-15 22:04:19.557828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:25.451 [2024-07-15 22:04:19.568510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120)
00:26:25.451 [2024-07-15 22:04:19.568533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.451 [2024-07-15 22:04:19.568542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.452 [2024-07-15 22:04:19.577063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120)
00:26:25.452 [2024-07-15 22:04:19.577086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.452 [2024-07-15 22:04:19.577095] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.452 [2024-07-15 22:04:19.586641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:25.452 [2024-07-15 22:04:19.586667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.452 [2024-07-15 22:04:19.586676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.452 [2024-07-15 22:04:19.596792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:25.452 [2024-07-15 22:04:19.596815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.452 [2024-07-15 22:04:19.596824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.452 [2024-07-15 22:04:19.607623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:25.452 [2024-07-15 22:04:19.607647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.452 [2024-07-15 22:04:19.607656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.452 [2024-07-15 22:04:19.618588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:25.452 [2024-07-15 22:04:19.618612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.452 [2024-07-15 22:04:19.618621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.452 [2024-07-15 22:04:19.629344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:25.452 [2024-07-15 22:04:19.629367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.452 [2024-07-15 22:04:19.629375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.452 [2024-07-15 22:04:19.638724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:25.452 [2024-07-15 22:04:19.638748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.452 [2024-07-15 22:04:19.638756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.452 [2024-07-15 22:04:19.647746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:25.452 [2024-07-15 22:04:19.647770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.452 [2024-07-15 22:04:19.647779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:25.452 [2024-07-15 22:04:19.656822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120)
00:26:25.452 [2024-07-15 22:04:19.656845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.452 [2024-07-15 22:04:19.656853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:25.452 [2024-07-15 22:04:19.665800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120)
00:26:25.452 [2024-07-15 22:04:19.665822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.452 [2024-07-15 22:04:19.665831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record pattern — nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR* data digest error on tqpair=(0x14b1120), a READ command print (sqid:1, len:32, SGL TRANSPORT DATA BLOCK), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for the remaining I/Os between 22:04:19.675 and 22:04:20.650, varying only in timestamp, cid, lba, and sqhd ...]
00:26:26.497 [2024-07-15 22:04:20.658866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120)
00:26:26.497 [2024-07-15 22:04:20.658888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 22:04:20.658896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.497 [2024-07-15 22:04:20.667562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.497 [2024-07-15 22:04:20.667583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.497 [2024-07-15 22:04:20.667594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.497 [2024-07-15 22:04:20.676352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.497 [2024-07-15 22:04:20.676374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.497 [2024-07-15 22:04:20.676382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.497 [2024-07-15 22:04:20.684832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.497 [2024-07-15 22:04:20.684853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.497 [2024-07-15 22:04:20.684862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.497 [2024-07-15 22:04:20.693493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.497 [2024-07-15 22:04:20.693514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.497 [2024-07-15 22:04:20.693522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.497 [2024-07-15 22:04:20.702428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.497 [2024-07-15 22:04:20.702449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.497 [2024-07-15 22:04:20.702458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.497 [2024-07-15 22:04:20.711574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.497 [2024-07-15 22:04:20.711595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.497 [2024-07-15 22:04:20.711604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.497 [2024-07-15 22:04:20.720048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.497 [2024-07-15 22:04:20.720069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.497 [2024-07-15 22:04:20.720077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.497 [2024-07-15 22:04:20.729113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.497 [2024-07-15 22:04:20.729135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.497 [2024-07-15 22:04:20.729143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.737747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.737770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.757 [2024-07-15 22:04:20.737779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.747594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.747620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.757 [2024-07-15 22:04:20.747628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.756088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.756108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.757 [2024-07-15 22:04:20.756116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.763809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.763830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.757 [2024-07-15 22:04:20.763838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.771129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.771149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.757 [2024-07-15 22:04:20.771157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.777887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.777906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.757 [2024-07-15 22:04:20.777914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.784315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.784335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.757 [2024-07-15 22:04:20.784342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.790707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.790727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.757 [2024-07-15 22:04:20.790735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.797070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.797090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.757 [2024-07-15 22:04:20.797098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.757 [2024-07-15 22:04:20.803364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.757 [2024-07-15 22:04:20.803384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.803392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.809550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.809570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.809578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.815512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.815532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.815540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.821410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.821429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.821437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.827253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.827273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.827281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.833100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.833121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.833129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.839054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.839075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.839083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.845345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.845367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.845376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.853177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.853197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.853205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.863136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.863160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.863171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.871944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 
[2024-07-15 22:04:20.871965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.871973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.880356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.880377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.880385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.888543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.888574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.888583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.895676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.895697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.895705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.902663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.902683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.902691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.909256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.909275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.909283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.915769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.915789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.915797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.923720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.923741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.923749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.934067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.934092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.934100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.944008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.944029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.944037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.953053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.953074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.953082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.961615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.961635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.758 [2024-07-15 22:04:20.961643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.758 [2024-07-15 22:04:20.969594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.758 [2024-07-15 22:04:20.969613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.759 [2024-07-15 22:04:20.969621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.759 [2024-07-15 22:04:20.980121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.759 [2024-07-15 22:04:20.980142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.759 [2024-07-15 22:04:20.980150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.759 [2024-07-15 22:04:20.989709] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:26.759 [2024-07-15 22:04:20.989728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.759 [2024-07-15 22:04:20.989736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:20.998404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:20.998426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:20.998434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.007249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.007274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.007282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.016241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.016263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.016271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.026554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.026575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.026583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.037088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.037109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.037118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.045962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.045984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.045992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:27.019 [2024-07-15 22:04:21.055113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.055134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.055142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.064235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.064256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.064264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.074850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.074872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.074881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.084746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.084768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.084776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.094843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.094869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.094878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.104526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.104550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.104559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.113963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.113985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.113994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.123307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.123331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.123341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.133278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.133300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.133309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.142818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.142840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.142849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.152582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.152603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.152612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.161712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.161734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.161742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.171312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.171335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.171343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.182883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.182905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.182913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.192191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.192212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.192221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.201172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.201194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.201202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.212283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.212305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.212313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.222204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.222231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.222240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.232350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.232372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.232381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.243517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.243539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.243547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.019 [2024-07-15 22:04:21.253909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.019 [2024-07-15 22:04:21.253931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.019 [2024-07-15 22:04:21.253939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.279 [2024-07-15 22:04:21.264843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.279 [2024-07-15 22:04:21.264867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.279 [2024-07-15 22:04:21.264880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.279 [2024-07-15 22:04:21.275076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.275099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.275108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.285854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.285875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.285883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.294805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.294827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.294835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.303111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.303134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.303142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.312366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.312389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.312397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.321378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.321400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 
[2024-07-15 22:04:21.321408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.330546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.330569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.330577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.340932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.340954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.340963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.352033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.352060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.352068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.361817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.361839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.361847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.372852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.372874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.372882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.383141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.383165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.383174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.393879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.393903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.393911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.404678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.404700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.404708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.414518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.414539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.414547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.424898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.424920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.424928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.436093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.436116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.436124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.446863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.446885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.446894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.456496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.456518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.456527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.465525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.465547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.465555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.474086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.474107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.474115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.483634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.483654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.483662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.491407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.491428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.491436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.499234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.499254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.499263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.507060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.507081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.507089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.280 [2024-07-15 22:04:21.513998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.280 [2024-07-15 22:04:21.514018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.280 [2024-07-15 22:04:21.514029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.540 [2024-07-15 22:04:21.521301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b1120) 00:26:27.540 [2024-07-15 22:04:21.521323] nvme_qpair.c: 
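Each trio above records one provoked digest failure: the host's crc32c check over the received payload fails in nvme_tcp_accel_seq_recv_compute_crc32_done, the offending READ is printed, and the command completes with TRANSIENT TRANSPORT ERROR (00/22), which is exactly the status the accel crc32c error injector driven by this test is meant to produce (the injector is disarmed again later in this log via accel_error_inject_error -o crc32c -t disable). The harness then reads the accumulated error count back over the bdevperf RPC socket; a minimal stand-alone sketch of that query, using only the rpc.py path, socket, and jq filter visible in this trace:

    # Sketch of get_transient_errcount as traced below: fetch per-bdev iostat
    # over the bperf RPC socket and extract the TRANSIENT TRANSPORT ERROR
    # completion count. Relies on bdev_nvme_set_options --nvme-error-stat,
    # which this harness enables on its bdevperf instances, so iostat carries
    # per-status-code NVMe error counters.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        local bdev=$1
        "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # the test asserts that injected errors surfaced

The (( 255 > 0 )) arithmetic test in the trace below is this assertion with the fetched value, 255, already substituted in.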
00:26:27.540
00:26:27.540 Latency(us)
00:26:27.540 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:27.540 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:27.540 nvme0n1                     :       2.00    3951.28     493.91       0.00     0.00    4045.73    1018.66   12822.26
00:26:27.540 ===================================================================================================================
00:26:27.540 Total                       :               3951.28     493.91       0.00     0.00    4045.73    1018.66   12822.26
00:26:27.540 0
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:27.540 | .driver_specific
00:26:27.540 | .nvme_error
00:26:27.540 | .status_code
00:26:27.540 | .command_transient_transport_error'
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 255 > 0 ))
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3840537
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3840537 ']'
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3840537
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3840537
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:27.540 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3840537'
killing process with pid 3840537
22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3840537
Received shutdown signal, test time was about 2.000000 seconds
00:26:27.541
00:26:27.541 Latency(us)
00:26:27.541 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:27.541 ===================================================================================================================
00:26:27.541 Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:26:27.541 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3840537
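With the randread pass torn down, run_bperf_err repeats the experiment as a randwrite workload (4096-byte I/Os at queue depth 128, as the rw/bs/qd assignments below show). The bdevperf bring-up it traces can be sketched as follows; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not the autotest implementation:

    # Sketch: launch bdevperf idle (-z) on core mask 0x2 with its RPC server
    # on the bperf socket, mirroring the invocation traced below, then wait
    # until the socket answers before configuring it.
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    "$bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Simplified waitforlisten: poll the RPC server until it responds,
    # where the harness instead uses a bounded retry loop (max_retries=100
    # in the trace below).
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Because -z holds bdevperf idle, the NVMe-oF controller can be attached and tuned over RPC before any I/O starts.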
qd 00:26:27.799 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:27.799 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:27.799 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:27.799 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3841234 00:26:27.799 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3841234 /var/tmp/bperf.sock 00:26:27.799 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3841234 ']' 00:26:27.800 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:27.800 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:27.800 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:27.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:27.800 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.800 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:27.800 22:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.800 [2024-07-15 22:04:21.995417] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:27.800 [2024-07-15 22:04:21.995469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841234 ] 00:26:27.800 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.058 [2024-07-15 22:04:22.050552] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.058 [2024-07-15 22:04:22.118701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.626 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.626 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:28.626 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.626 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.886 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:28.886 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.886 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.886 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.886 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp 
-a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.886 22:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.146 nvme0n1 00:26:29.146 22:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:29.146 22:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.146 22:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.146 22:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.146 22:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:29.146 22:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.404 Running I/O for 2 seconds... 00:26:29.404 [2024-07-15 22:04:23.443316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7100 00:26:29.404 [2024-07-15 22:04:23.444117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.444148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.452920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f9b30 00:26:29.404 [2024-07-15 22:04:23.453599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.453622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.461372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fd640 00:26:29.404 [2024-07-15 22:04:23.462149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.462168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.471631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ecc78 00:26:29.404 [2024-07-15 22:04:23.472528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.472548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.480737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190edd58 00:26:29.404 [2024-07-15 22:04:23.481668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8846 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.481688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.489908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb048 00:26:29.404 [2024-07-15 22:04:23.490834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.490853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.499009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f9f68 00:26:29.404 [2024-07-15 22:04:23.499963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.499982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.508022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f8e88 00:26:29.404 [2024-07-15 22:04:23.508935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.508955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.517077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7da8 00:26:29.404 [2024-07-15 22:04:23.518037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.518059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.526196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e4de8 00:26:29.404 [2024-07-15 22:04:23.527132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.527151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.535288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e3d08 00:26:29.404 [2024-07-15 22:04:23.536221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.536243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.544436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb480 00:26:29.404 [2024-07-15 22:04:23.545386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24223 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.545407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.553524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fc560 00:26:29.404 [2024-07-15 22:04:23.554443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.554462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.562660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190feb58 00:26:29.404 [2024-07-15 22:04:23.563593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.404 [2024-07-15 22:04:23.563611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.404 [2024-07-15 22:04:23.571765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190de038 00:26:29.405 [2024-07-15 22:04:23.572675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.405 [2024-07-15 22:04:23.572693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.405 [2024-07-15 22:04:23.580815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190df118 00:26:29.405 [2024-07-15 22:04:23.581756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.405 [2024-07-15 22:04:23.581774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.405 [2024-07-15 22:04:23.589955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e01f8 00:26:29.405 [2024-07-15 22:04:23.590889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.405 [2024-07-15 22:04:23.590907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.405 [2024-07-15 22:04:23.599003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fd208 00:26:29.405 [2024-07-15 22:04:23.599947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.405 [2024-07-15 22:04:23.599966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.405 [2024-07-15 22:04:23.608257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fe720 00:26:29.405 [2024-07-15 22:04:23.609207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 
nsid:1 lba:13104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.405 [2024-07-15 22:04:23.609230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.405 [2024-07-15 22:04:23.617648] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ec840 00:26:29.405 [2024-07-15 22:04:23.618616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.405 [2024-07-15 22:04:23.618634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.405 [2024-07-15 22:04:23.626750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ed920 00:26:29.405 [2024-07-15 22:04:23.627701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.405 [2024-07-15 22:04:23.627720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.405 [2024-07-15 22:04:23.635829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eea00 00:26:29.405 [2024-07-15 22:04:23.636781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.405 [2024-07-15 22:04:23.636799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.405 [2024-07-15 22:04:23.645115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fa3a0 00:26:29.663 [2024-07-15 22:04:23.646046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.646066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.654349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f92c0 00:26:29.663 [2024-07-15 22:04:23.655297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.655316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.663474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f81e0 00:26:29.663 [2024-07-15 22:04:23.664377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.664397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.672394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7100 00:26:29.663 [2024-07-15 22:04:23.673322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:42 nsid:1 lba:9153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.673341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.681449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e4140 00:26:29.663 [2024-07-15 22:04:23.682374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.682393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.690577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e88f8 00:26:29.663 [2024-07-15 22:04:23.691483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.691502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.699641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb8b8 00:26:29.663 [2024-07-15 22:04:23.700483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.700502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.708863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fc998 00:26:29.663 [2024-07-15 22:04:23.709690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.709708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.718241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f5be8 00:26:29.663 [2024-07-15 22:04:23.719245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.719264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.727372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f4b08 00:26:29.663 [2024-07-15 22:04:23.728346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.728364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.736358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f3a28 00:26:29.663 [2024-07-15 22:04:23.737381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.737399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.745493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f2948 00:26:29.663 [2024-07-15 22:04:23.746542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.746561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.754557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e5658 00:26:29.663 [2024-07-15 22:04:23.755570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.755591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.763679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e6738 00:26:29.663 [2024-07-15 22:04:23.764704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.764723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.772719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e7818 00:26:29.663 [2024-07-15 22:04:23.773748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.773766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.781894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ee190 00:26:29.663 [2024-07-15 22:04:23.782951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.782970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.791008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fac10 00:26:29.663 [2024-07-15 22:04:23.792060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.792079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.800051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f9b30 00:26:29.663 [2024-07-15 22:04:23.801066] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.801084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.809099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f8a50 00:26:29.663 [2024-07-15 22:04:23.810128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.810146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.818239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7970 00:26:29.663 [2024-07-15 22:04:23.819270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.819289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.827294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eaef0 00:26:29.663 [2024-07-15 22:04:23.828334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.828352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.836397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ebfd0 00:26:29.663 [2024-07-15 22:04:23.837453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.837472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.845515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f0ff8 00:26:29.663 [2024-07-15 22:04:23.846463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.846482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.663 [2024-07-15 22:04:23.854620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f20d8 00:26:29.663 [2024-07-15 22:04:23.855569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.663 [2024-07-15 22:04:23.855588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.664 [2024-07-15 22:04:23.863759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f6020 00:26:29.664 
[2024-07-15 22:04:23.864703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.664 [2024-07-15 22:04:23.864722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.664 [2024-07-15 22:04:23.872833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f4f40 00:26:29.664 [2024-07-15 22:04:23.873885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.664 [2024-07-15 22:04:23.873903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.664 [2024-07-15 22:04:23.881895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f3e60 00:26:29.664 [2024-07-15 22:04:23.882970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.664 [2024-07-15 22:04:23.882988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.664 [2024-07-15 22:04:23.891010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f2d80 00:26:29.664 [2024-07-15 22:04:23.892036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.664 [2024-07-15 22:04:23.892054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.664 [2024-07-15 22:04:23.900142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e5220 00:26:29.664 [2024-07-15 22:04:23.901200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.664 [2024-07-15 22:04:23.901219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.909471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e6300 00:26:29.922 [2024-07-15 22:04:23.910505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.910523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.918567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e73e0 00:26:29.922 [2024-07-15 22:04:23.919610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.919628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.927983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e84c0 
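Note: the randwrite leg above is driven by the same digest-error harness as the randread leg before it. Per-status-code NVMe error counters are enabled, a controller is attached over TCP with data digest (--ddgst) on, and crc32c corruption is injected into the accel layer so that each affected PDU fails digest verification and completes as a transient transport error, which the initiator, given an infinite retry count, simply requeues. A minimal sketch of that setup using only the RPC calls visible in this trace (paths abbreviated from the logged absolute ones; the un-prefixed rpc_cmd invocations are assumed to hit the nvmf target's default RPC socket):

    # bdevperf side, over its private RPC socket: keep per-status-code
    # error counters and retry transient failures forever (-1)
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # target side: leave crc32c injection disabled so the attach succeeds
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # attach over TCP with data digest enabled
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt the next 256 crc32c computations, then drive I/O
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests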
00:26:29.922 [2024-07-15 22:04:23.929072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.929091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.937409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ee5c8 00:26:29.922 [2024-07-15 22:04:23.938486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.938504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.946776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fa7d8 00:26:29.922 [2024-07-15 22:04:23.947848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.947868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.955942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f96f8 00:26:29.922 [2024-07-15 22:04:23.956940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.956958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.965316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f8618 00:26:29.922 [2024-07-15 22:04:23.966274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.966294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.974413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7538 00:26:29.922 [2024-07-15 22:04:23.975411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.975430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.983572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eb328 00:26:29.922 [2024-07-15 22:04:23.984570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.984589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:23.992988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with 
pdu=0x2000190f0350 00:26:29.922 [2024-07-15 22:04:23.994056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:23.994079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.002244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f1430 00:26:29.922 [2024-07-15 22:04:24.003241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.003261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.011393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f6cc8 00:26:29.922 [2024-07-15 22:04:24.012406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.012425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.020515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f5be8 00:26:29.922 [2024-07-15 22:04:24.021473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.021491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.029003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ef270 00:26:29.922 [2024-07-15 22:04:24.029917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.029936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.038558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fc998 00:26:29.922 [2024-07-15 22:04:24.039590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.039609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.048107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7538 00:26:29.922 [2024-07-15 22:04:24.049257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.049276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.057625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8974c0) with pdu=0x2000190f4b08 00:26:29.922 [2024-07-15 22:04:24.058908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.058926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.067181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e23b8 00:26:29.922 [2024-07-15 22:04:24.068588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.068608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.076668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ed4e8 00:26:29.922 [2024-07-15 22:04:24.078208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.078230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.083108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f31b8 00:26:29.922 [2024-07-15 22:04:24.083792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.083811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.092374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e8d30 00:26:29.922 [2024-07-15 22:04:24.093037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.093056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.101598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e23b8 00:26:29.922 [2024-07-15 22:04:24.102121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.922 [2024-07-15 22:04:24.102140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.922 [2024-07-15 22:04:24.111005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f96f8 00:26:29.922 [2024-07-15 22:04:24.111796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.923 [2024-07-15 22:04:24.111815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.923 [2024-07-15 22:04:24.120051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8974c0) with pdu=0x2000190f31b8 00:26:29.923 [2024-07-15 22:04:24.120835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.923 [2024-07-15 22:04:24.120854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.923 [2024-07-15 22:04:24.128990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f9f68 00:26:29.923 [2024-07-15 22:04:24.129767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.923 [2024-07-15 22:04:24.129786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.923 [2024-07-15 22:04:24.137374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e3498 00:26:29.923 [2024-07-15 22:04:24.138131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.923 [2024-07-15 22:04:24.138149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.923 [2024-07-15 22:04:24.146898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ee190 00:26:29.923 [2024-07-15 22:04:24.147787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.923 [2024-07-15 22:04:24.147807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.923 [2024-07-15 22:04:24.156411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e6b70 00:26:29.923 [2024-07-15 22:04:24.157407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.923 [2024-07-15 22:04:24.157426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.166179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f1430 00:26:30.182 [2024-07-15 22:04:24.167317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.167336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.175690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e3498 00:26:30.182 [2024-07-15 22:04:24.176940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.176959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.185278] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e6300 00:26:30.182 [2024-07-15 22:04:24.186647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.186666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.194799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e23b8 00:26:30.182 [2024-07-15 22:04:24.196304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.196323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.201212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ed0b0 00:26:30.182 [2024-07-15 22:04:24.201855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.201874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.210463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e6738 00:26:30.182 [2024-07-15 22:04:24.211108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.211128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.219092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7100 00:26:30.182 [2024-07-15 22:04:24.219719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.219738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.228653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eff18 00:26:30.182 [2024-07-15 22:04:24.229412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.229434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.238350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ed4e8 00:26:30.182 [2024-07-15 22:04:24.239233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.239252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.247847] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fcdd0 00:26:30.182 [2024-07-15 22:04:24.248850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.248869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.257369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fd640 00:26:30.182 [2024-07-15 22:04:24.258492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.258511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.266894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eff18 00:26:30.182 [2024-07-15 22:04:24.268140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.268158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.276382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e12d8 00:26:30.182 [2024-07-15 22:04:24.277735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.277754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.285962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e4578 00:26:30.182 [2024-07-15 22:04:24.287456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.287474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.292395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e84c0 00:26:30.182 [2024-07-15 22:04:24.293030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.293048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.301550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e88f8 00:26:30.182 [2024-07-15 22:04:24.302174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.302194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 
22:04:24.310916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f46d0 00:26:30.182 [2024-07-15 22:04:24.311424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.311446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.320345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e01f8 00:26:30.182 [2024-07-15 22:04:24.321094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.321113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.328675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e1b48 00:26:30.182 [2024-07-15 22:04:24.329413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.329431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.338243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f2510 00:26:30.182 [2024-07-15 22:04:24.339101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.339119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.347764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f2948 00:26:30.182 [2024-07-15 22:04:24.348762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.348781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.357317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fda78 00:26:30.182 [2024-07-15 22:04:24.358428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.358446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.366841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e1b48 00:26:30.182 [2024-07-15 22:04:24.368066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.368084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
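Every injected failure above follows the same pattern: data_crc32_calc_done() rejects the PDU digest, the initiator prints the WRITE it is about to requeue, and the completion carries COMMAND TRANSIENT TRANSPORT ERROR (00/22, i.e. generic status type, status code 0x22), which the -1 retry count absorbs without failing the job. The pass/fail check mirrors the get_transient_errcount call near the top of this run: read the per-status-code counter maintained by --nvme-error-stat back out of the bdev iostat and require it to be non-zero. A sketch, assuming the same socket and bdev name as above:

    errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # the run only passes if the injected digest errors were actually observed
    (( errs > 0 ))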
00:26:30.182 [2024-07-15 22:04:24.376320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190de038 00:26:30.182 [2024-07-15 22:04:24.377676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.377695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.385876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fbcf0 00:26:30.182 [2024-07-15 22:04:24.387349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.387368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.392282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190df550 00:26:30.182 [2024-07-15 22:04:24.392922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.392941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.401411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb8b8 00:26:30.182 [2024-07-15 22:04:24.402021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.402039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.410501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190dece0 00:26:30.182 [2024-07-15 22:04:24.411115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.182 [2024-07-15 22:04:24.411133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.182 [2024-07-15 22:04:24.419763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e27f0 00:26:30.183 [2024-07-15 22:04:24.420377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.183 [2024-07-15 22:04:24.420396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.441 [2024-07-15 22:04:24.428190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190de8a8 00:26:30.441 [2024-07-15 22:04:24.428795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.441 [2024-07-15 22:04:24.428814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0006 p:0 m:0 dnr:0 00:26:30.441 [2024-07-15 22:04:24.437747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fe720 00:26:30.441 [2024-07-15 22:04:24.438463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.441 [2024-07-15 22:04:24.438482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.441 [2024-07-15 22:04:24.447230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ed0b0 00:26:30.441 [2024-07-15 22:04:24.448067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.441 [2024-07-15 22:04:24.448086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.441 [2024-07-15 22:04:24.456776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f5be8 00:26:30.441 [2024-07-15 22:04:24.457728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.441 [2024-07-15 22:04:24.457747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.441 [2024-07-15 22:04:24.466289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e1710 00:26:30.442 [2024-07-15 22:04:24.467372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.467390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.475925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fe720 00:26:30.442 [2024-07-15 22:04:24.477143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.477161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.485480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb480 00:26:30.442 [2024-07-15 22:04:24.486821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.486839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.494942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ecc78 00:26:30.442 [2024-07-15 22:04:24.496392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.496410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.501348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f4b08 00:26:30.442 [2024-07-15 22:04:24.501962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.501981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.510915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f31b8 00:26:30.442 [2024-07-15 22:04:24.511659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.511679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.520413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eaab8 00:26:30.442 [2024-07-15 22:04:24.521259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.521278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.529611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f6458 00:26:30.442 [2024-07-15 22:04:24.530444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.530462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.538759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f96f8 00:26:30.442 [2024-07-15 22:04:24.539580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.539598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.547058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e9e10 00:26:30.442 [2024-07-15 22:04:24.547887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.547908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.556600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f31b8 00:26:30.442 [2024-07-15 22:04:24.557564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.557582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.566082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eaab8 00:26:30.442 [2024-07-15 22:04:24.567151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.567169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.575588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e7818 00:26:30.442 [2024-07-15 22:04:24.576776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.576795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.585121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e5ec8 00:26:30.442 [2024-07-15 22:04:24.586434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.586453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.594588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190df118 00:26:30.442 [2024-07-15 22:04:24.596050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.596068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.601016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190feb58 00:26:30.442 [2024-07-15 22:04:24.601604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.601622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.610212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fe720 00:26:30.442 [2024-07-15 22:04:24.610792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.610810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.619353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eff18 00:26:30.442 [2024-07-15 22:04:24.619926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.619944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.629437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e5ec8 00:26:30.442 [2024-07-15 22:04:24.630753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.630772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.637251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e6738 00:26:30.442 [2024-07-15 22:04:24.637932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.637950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.646747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190df988 00:26:30.442 [2024-07-15 22:04:24.647558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.647576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.656348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f6458 00:26:30.442 [2024-07-15 22:04:24.657269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.657287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.665839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f4298 00:26:30.442 [2024-07-15 22:04:24.666895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.666913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.442 [2024-07-15 22:04:24.675422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eff18 00:26:30.442 [2024-07-15 22:04:24.676611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.442 [2024-07-15 22:04:24.676629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.701 [2024-07-15 22:04:24.685030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f92c0 00:26:30.701 [2024-07-15 22:04:24.686345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.701 [2024-07-15 22:04:24.686363] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.701 [2024-07-15 22:04:24.694532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e38d0 00:26:30.701 [2024-07-15 22:04:24.695980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.701 [2024-07-15 22:04:24.695999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.701 [2024-07-15 22:04:24.700977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb048 00:26:30.701 [2024-07-15 22:04:24.701565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.701 [2024-07-15 22:04:24.701584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.701 [2024-07-15 22:04:24.710526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7da8 00:26:30.701 [2024-07-15 22:04:24.711232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.711251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.719666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f0788 00:26:30.702 [2024-07-15 22:04:24.720366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.720385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.728949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f0788 00:26:30.702 [2024-07-15 22:04:24.729649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.729667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.737461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e01f8 00:26:30.702 [2024-07-15 22:04:24.738141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.738159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.746948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ebfd0 00:26:30.702 [2024-07-15 22:04:24.747759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.747778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.756499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e1710 00:26:30.702 [2024-07-15 22:04:24.757413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.757431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.765831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f0ff8 00:26:30.702 [2024-07-15 22:04:24.766867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.766885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.775367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f4f40 00:26:30.702 [2024-07-15 22:04:24.776532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.776551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.784874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f3a28 00:26:30.702 [2024-07-15 22:04:24.786161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.786182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.794361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ddc00 00:26:30.702 [2024-07-15 22:04:24.795772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.795791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.804093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f5378 00:26:30.702 [2024-07-15 22:04:24.805624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.805642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.810504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e6738 00:26:30.702 [2024-07-15 22:04:24.811212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 
22:04:24.811234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.819681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190feb58 00:26:30.702 [2024-07-15 22:04:24.820366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.820385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.828890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb048 00:26:30.702 [2024-07-15 22:04:24.829555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.829574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.837210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f6890 00:26:30.702 [2024-07-15 22:04:24.837889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.837908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.846714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ddc00 00:26:30.702 [2024-07-15 22:04:24.847492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.847510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.856264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ed4e8 00:26:30.702 [2024-07-15 22:04:24.857179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.857197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.865732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190feb58 00:26:30.702 [2024-07-15 22:04:24.866758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.866776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.875288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e2c28 00:26:30.702 [2024-07-15 22:04:24.876439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.702 [2024-07-15 22:04:24.876457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.884777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ddc00 00:26:30.702 [2024-07-15 22:04:24.886046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.886064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.894271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e9168 00:26:30.702 [2024-07-15 22:04:24.895664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.895682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.903824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e6738 00:26:30.702 [2024-07-15 22:04:24.905348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.905367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.910245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f6020 00:26:30.702 [2024-07-15 22:04:24.910940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.910959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.919727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f8e88 00:26:30.702 [2024-07-15 22:04:24.920521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.920540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.929461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190de470 00:26:30.702 [2024-07-15 22:04:24.930393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.930412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.702 [2024-07-15 22:04:24.939006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f0ff8 00:26:30.702 [2024-07-15 22:04:24.940056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22064 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:30.702 [2024-07-15 22:04:24.940075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:24.948490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e0630 00:26:30.962 [2024-07-15 22:04:24.949526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:24.949545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:24.956908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e2c28 00:26:30.962 [2024-07-15 22:04:24.958187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:24.958206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:24.964712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e1710 00:26:30.962 [2024-07-15 22:04:24.965377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:24.965396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:24.974271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb480 00:26:30.962 [2024-07-15 22:04:24.975046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:24.975064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:24.983936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7538 00:26:30.962 [2024-07-15 22:04:24.984851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:24.984870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:24.993418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e9168 00:26:30.962 [2024-07-15 22:04:24.994443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:24.994461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:25.002979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eaab8 00:26:30.962 [2024-07-15 22:04:25.004141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14706 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:25.004160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:25.012465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb480 00:26:30.962 [2024-07-15 22:04:25.013744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:25.013764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:25.022017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f0bc0 00:26:30.962 [2024-07-15 22:04:25.023419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:25.023441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:25.031539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e3498 00:26:30.962 [2024-07-15 22:04:25.033051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:25.033070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.962 [2024-07-15 22:04:25.037940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f6cc8 00:26:30.962 [2024-07-15 22:04:25.038607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.962 [2024-07-15 22:04:25.038625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.047479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190df550 00:26:30.963 [2024-07-15 22:04:25.048274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.048293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.057016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f5be8 00:26:30.963 [2024-07-15 22:04:25.057933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.057952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.066153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f3a28 00:26:30.963 [2024-07-15 22:04:25.067070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:20128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.067089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.075527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fb048 00:26:30.963 [2024-07-15 22:04:25.076294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.076313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.084871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ddc00 00:26:30.963 [2024-07-15 22:04:25.085878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.085897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.093186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e99d8 00:26:30.963 [2024-07-15 22:04:25.094209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.094231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.102666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f7da8 00:26:30.963 [2024-07-15 22:04:25.103750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.103770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.112081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ef6a8 00:26:30.963 [2024-07-15 22:04:25.113336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.113354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.121638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f8618 00:26:30.963 [2024-07-15 22:04:25.123023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.123046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.131135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f5378 00:26:30.963 [2024-07-15 22:04:25.132642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:53 nsid:1 lba:5784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.132660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.137545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e5a90 00:26:30.963 [2024-07-15 22:04:25.138179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.138197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.146829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e9e10 00:26:30.963 [2024-07-15 22:04:25.147498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.147518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.156258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f8618 00:26:30.963 [2024-07-15 22:04:25.157008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.157027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.165416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f0350 00:26:30.963 [2024-07-15 22:04:25.166172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.166191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.174788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e1710 00:26:30.963 [2024-07-15 22:04:25.175402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.175420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.184314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f4f40 00:26:30.963 [2024-07-15 22:04:25.185062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.185080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.192897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e9e10 00:26:30.963 [2024-07-15 22:04:25.194136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.194154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.963 [2024-07-15 22:04:25.200843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fdeb0 00:26:30.963 [2024-07-15 22:04:25.201486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.963 [2024-07-15 22:04:25.201505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.210539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f0bc0 00:26:31.223 [2024-07-15 22:04:25.211286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.211304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.220080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e1710 00:26:31.223 [2024-07-15 22:04:25.220969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.220988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.229603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f8618 00:26:31.223 [2024-07-15 22:04:25.230597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.230615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.239234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190eaab8 00:26:31.223 [2024-07-15 22:04:25.240364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.240381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.247335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f0bc0 00:26:31.223 [2024-07-15 22:04:25.247900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.247918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.257201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f9b30 00:26:31.223 [2024-07-15 22:04:25.258142] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.258161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.266168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ec840 00:26:31.223 [2024-07-15 22:04:25.267099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.267118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.275321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e3498 00:26:31.223 [2024-07-15 22:04:25.276267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.276285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.284415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f9b30 00:26:31.223 [2024-07-15 22:04:25.285338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.285355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.293526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ec840 00:26:31.223 [2024-07-15 22:04:25.294475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.294493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.302637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e3498 00:26:31.223 [2024-07-15 22:04:25.303588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.303606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.311202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e5658 00:26:31.223 [2024-07-15 22:04:25.312128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.312146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.320507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e7c50 00:26:31.223 
[2024-07-15 22:04:25.321434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.321452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.329864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e5220 00:26:31.223 [2024-07-15 22:04:25.330804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.330822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.338892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fc128 00:26:31.223 [2024-07-15 22:04:25.339515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.223 [2024-07-15 22:04:25.339538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:31.223 [2024-07-15 22:04:25.349261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f4f40 00:26:31.224 [2024-07-15 22:04:25.350573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.350592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.356550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e1710 00:26:31.224 [2024-07-15 22:04:25.357069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.357088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.365796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190df550 00:26:31.224 [2024-07-15 22:04:25.366628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.366646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.375710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e12d8 00:26:31.224 [2024-07-15 22:04:25.376876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.376895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.383732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fc560 
00:26:31.224 [2024-07-15 22:04:25.384288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.384307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.393423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e8d30 00:26:31.224 [2024-07-15 22:04:25.394032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.394051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.402913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190ee190 00:26:31.224 [2024-07-15 22:04:25.403908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.403928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.412150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190f3a28 00:26:31.224 [2024-07-15 22:04:25.413152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.413171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.421453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190fa7d8 00:26:31.224 [2024-07-15 22:04:25.422408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.422427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.430552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e27f0 00:26:31.224 [2024-07-15 22:04:25.431509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.431527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:31.224 [2024-07-15 22:04:25.439590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8974c0) with pdu=0x2000190e01f8 00:26:31.224 [2024-07-15 22:04:25.440504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.224 [2024-07-15 22:04:25.440523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:31.224 00:26:31.224 Latency(us) 00:26:31.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.224 
Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:31.224 nvme0n1 : 2.00 28027.73 109.48 0.00 0.00 4560.52 1951.83 10827.69 00:26:31.224 =================================================================================================================== 00:26:31.224 Total : 28027.73 109.48 0.00 0.00 4560.52 1951.83 10827.69 00:26:31.224 0 00:26:31.224 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:31.483 | .driver_specific 00:26:31.483 | .nvme_error 00:26:31.483 | .status_code 00:26:31.483 | .command_transient_transport_error' 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3841234 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3841234 ']' 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3841234 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3841234 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3841234' 00:26:31.483 killing process with pid 3841234 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3841234 00:26:31.483 Received shutdown signal, test time was about 2.000000 seconds 00:26:31.483 00:26:31.483 Latency(us) 00:26:31.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.483 =================================================================================================================== 00:26:31.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.483 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3841234 00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3841924 00:26:31.742 22:04:25 
00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3841924 /var/tmp/bperf.sock
00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3841924 ']'
00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:31.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:31.742 22:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:31.742 [2024-07-15 22:04:25.920945] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:26:31.742 [2024-07-15 22:04:25.920996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841924 ]
00:26:31.742 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:31.742 Zero copy mechanism will not be used.
00:26:31.742 EAL: No free 2048 kB hugepages reported on node 1
00:26:31.742 [2024-07-15 22:04:25.975336] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:31.742 [2024-07-15 22:04:26.054812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:32.568 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:32.568 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:32.568 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:32.568 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:32.827 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:32.827 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:32.827 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:32.827 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
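The second bdevperf pass (randwrite, 128 KiB blocks, queue depth 16) is wired up the same way as the first. A condensed sketch of the setup traced here and just below, assuming this job's paths and that rpc_cmd with no -s flag (as used for accel_error_inject_error in the trace) reaches the nvmf target application's default RPC socket:

#!/usr/bin/env bash
# Condensed setup for the digest-error run, following the trace above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf idle (-z, armed later via perform_tests) on core mask 0x2.
"$SPDK_DIR"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
sleep 1  # stand-in for the harness's waitforlisten on $SOCK

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
# so every injected digest error is tallied instead of failing the run.
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any stale crc32c injection (default-socket rpc_cmd, per the assumption above).
"$SPDK_DIR"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# Attach the TCP target with data digest enabled; this creates bdev nvme0n1.
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

The trace just below then arms the injector for crc32c corruption (accel_error_inject_error -o crc32c -t corrupt -i 32) and starts the 2-second run with bdevperf.py perform_tests; each corrupted digest then appears in the log as a data_crc32_calc_done error paired with a TRANSIENT TRANSPORT ERROR completion.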
00:26:32.827 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:32.827 22:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:33.086 nvme0n1
00:26:33.086 22:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:33.086 22:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:33.086 22:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:33.086 22:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:33.086 22:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:33.086 22:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:33.344 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:33.344 Zero copy mechanism will not be used.
00:26:33.344 Running I/O for 2 seconds...
00:26:33.345 [2024-07-15 22:04:27.376422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90
00:26:33.345 [2024-07-15 22:04:27.376901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.345 [2024-07-15 22:04:27.376931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:33.345 [2024-07-15 22:04:27.386703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90
00:26:33.345 [2024-07-15 22:04:27.387105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.345 [2024-07-15 22:04:27.387130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:33.345 [2024-07-15 22:04:27.395176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90
00:26:33.345 [2024-07-15 22:04:27.395577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.345 [2024-07-15 22:04:27.395599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:33.345 [2024-07-15 22:04:27.402724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90
00:26:33.345 [2024-07-15 22:04:27.403100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.345 [2024-07-15 22:04:27.403121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:33.345 [2024-07-15 22:04:27.408815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90
00:26:33.345 [2024-07-15 22:04:27.409177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.345 [2024-07-15 22:04:27.409197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.415634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.415861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.415881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.422590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.422958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.422983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.428461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.428804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.428825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.434699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.435054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.435074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.441468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.441825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.441845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.447736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.448088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.448108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.453769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.454110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.454129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.460075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.460435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.460454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.466659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.467014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.467034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.472949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.473336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.473355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.480199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.480612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.480631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.486146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.486521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.486541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.492457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.492847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.492866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.498904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.499297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 
[2024-07-15 22:04:27.499317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.510876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.511367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.511386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.519089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.519449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.519467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.525848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.526187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.526207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.532598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.532952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.532971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.539011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.539350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.539369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.544819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.545145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.545165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.550370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.550640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.550659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.554735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.555002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.555021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.558668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.558886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.345 [2024-07-15 22:04:27.558905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.345 [2024-07-15 22:04:27.562412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.345 [2024-07-15 22:04:27.562625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.346 [2024-07-15 22:04:27.562644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.346 [2024-07-15 22:04:27.567097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.346 [2024-07-15 22:04:27.567348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.346 [2024-07-15 22:04:27.567367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.346 [2024-07-15 22:04:27.570893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.346 [2024-07-15 22:04:27.571125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.346 [2024-07-15 22:04:27.571143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.346 [2024-07-15 22:04:27.575121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.346 [2024-07-15 22:04:27.575330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.346 [2024-07-15 22:04:27.575349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.346 [2024-07-15 22:04:27.579233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.346 [2024-07-15 22:04:27.579463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.346 [2024-07-15 22:04:27.579485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.346 [2024-07-15 22:04:27.583620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.346 [2024-07-15 22:04:27.583841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.346 [2024-07-15 22:04:27.583860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.587401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.587636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.587655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.591138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.591394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.591414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.595645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.595864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.595883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.599301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.599533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.599552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.603013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.603326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.603346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.606826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.607055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.607075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.610504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.610720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.610739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.614469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.614689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.614708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.618756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.618975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.618993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.622708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.622918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.622938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.626885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.627107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.627125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.630604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.630854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.630873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.635116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 
[2024-07-15 22:04:27.635445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.635464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.640942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.641193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.641212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.645921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.646183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.646201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.650489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.650735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.650754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.655401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.655691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.655710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.605 [2024-07-15 22:04:27.660767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.605 [2024-07-15 22:04:27.661010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.605 [2024-07-15 22:04:27.661029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.665456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.665705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.665724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.670618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) 
with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.670867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.670885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.675519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.675760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.675779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.680820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.681087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.681106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.685852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.686128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.686147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.690389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.690609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.690628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.694548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.694777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.694799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.700464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.700802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.700821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.706602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.706885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.706903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.712536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.712877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.712896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.718506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.718840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.718858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.726427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.726700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.726718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.732155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.732450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.732469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.739333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.739636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.739655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.749922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.750370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.750391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.759342] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.759647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.759666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.766210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.766476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.766495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.770895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.771123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.771143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.775187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.775405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.775424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.779075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.779304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.779322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.783024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.783247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.783266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.786878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.787091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.787110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:33.606 [2024-07-15 22:04:27.791223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.791442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.791461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.795939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.796153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.796172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.800095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.800317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.800337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.805596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.805826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.805844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.810370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.810572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.810592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.815130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.815347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.815366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.606 [2024-07-15 22:04:27.819517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.606 [2024-07-15 22:04:27.819731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.606 [2024-07-15 22:04:27.819749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.607 [2024-07-15 22:04:27.823973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.607 [2024-07-15 22:04:27.824204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.607 [2024-07-15 22:04:27.824223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.607 [2024-07-15 22:04:27.828938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.607 [2024-07-15 22:04:27.829151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.607 [2024-07-15 22:04:27.829169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.607 [2024-07-15 22:04:27.834795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.607 [2024-07-15 22:04:27.835122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.607 [2024-07-15 22:04:27.835141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.607 [2024-07-15 22:04:27.840326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.607 [2024-07-15 22:04:27.840542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.607 [2024-07-15 22:04:27.840564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.607 [2024-07-15 22:04:27.844870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.607 [2024-07-15 22:04:27.845090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.607 [2024-07-15 22:04:27.845110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.849143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.849381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.849401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.853548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.853775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.853794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.857794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.858017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.858036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.861689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.861909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.861928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.865553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.865770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.865788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.869412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.869633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.869652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.873948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.874182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.874201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.877924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.878154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.878172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.881815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.882029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.882048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.885692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.885917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.885936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.890250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.890507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.890526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.894119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.894347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.894366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.897958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.898177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.898196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.901744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.901967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.901986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.906470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.906685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 [2024-07-15 22:04:27.906704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.867 [2024-07-15 22:04:27.910375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:33.867 [2024-07-15 22:04:27.910594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.867 
[2024-07-15 22:04:27.910614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:33.868 [2024-07-15 22:04:27.914150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90
00:26:33.868 [2024-07-15 22:04:27.914361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.868 [2024-07-15 22:04:27.914379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:33.868 [2024-07-15 22:04:27.918895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90
00:26:33.868 [2024-07-15 22:04:27.919115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.868 [2024-07-15 22:04:27.919134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens of identical record triplets omitted (22:04:27.923 through 22:04:28.610): each WRITE sqid:1 cid:15 nsid:1 len:32 with a varying lba hits a data_crc32_calc_done "Data digest error" on tqpair=(0x897800) pdu=0x2000190fef90 and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.429 [2024-07-15 22:04:28.617049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.429 [2024-07-15 22:04:28.617068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.429 [2024-07-15 22:04:28.622239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.429 [2024-07-15 22:04:28.622499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.429 [2024-07-15 22:04:28.622517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.429 [2024-07-15 22:04:28.628103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.429 [2024-07-15 22:04:28.628373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.429 [2024-07-15 22:04:28.628392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.429 [2024-07-15 22:04:28.632592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.429 [2024-07-15 22:04:28.632804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.429 [2024-07-15 22:04:28.632824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.429 [2024-07-15 22:04:28.636634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.429 [2024-07-15 22:04:28.636856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.429 [2024-07-15 22:04:28.636876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.429 [2024-07-15 22:04:28.640669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.429 [2024-07-15 22:04:28.640892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.429 [2024-07-15 22:04:28.640910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.429 [2024-07-15 22:04:28.644871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.429 [2024-07-15 22:04:28.645085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.429 [2024-07-15 22:04:28.645104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.429 [2024-07-15 22:04:28.648811] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.429 [2024-07-15 22:04:28.649036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.429 [2024-07-15 22:04:28.649056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.429 [2024-07-15 22:04:28.652755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.429 [2024-07-15 22:04:28.652965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.429 [2024-07-15 22:04:28.652984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.429 [2024-07-15 22:04:28.658996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.430 [2024-07-15 22:04:28.659516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.430 [2024-07-15 22:04:28.659535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.669053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.669433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.669453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.676396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.676637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.676656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.680788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.681014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.681034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.685005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.685232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.685251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:34.689 [2024-07-15 22:04:28.689071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.689292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.689311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.692989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.693230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.693249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.697023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.697305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.697324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.700975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.701210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.701234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.705626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.705908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.705927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.709813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.710040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.710060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.713723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.713954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.713973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.717619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.717834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.689 [2024-07-15 22:04:28.717853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.689 [2024-07-15 22:04:28.721496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.689 [2024-07-15 22:04:28.721729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.721748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.725576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.725800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.725819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.729776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.729995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.730021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.734307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.734521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.734541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.738199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.738437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.738456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.742318] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.742538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.742558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.746221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.746440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.746460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.750135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.750418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.750436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.754062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.754211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.754233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.758751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.759095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.764813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.765028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.765047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.770516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.770751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.770771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.774678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.774894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.774913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.778817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.779036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.779055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.782962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.783178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.783197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.786991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.787230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.787248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.790928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.791146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.791165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.795350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.795579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.795598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.799555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.799808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 [2024-07-15 22:04:28.799826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.690 [2024-07-15 22:04:28.805193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.690 [2024-07-15 22:04:28.805437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.690 
[2024-07-15 22:04:28.805455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.809851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.810081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.810100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.814086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.814318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.814336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.818317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.818540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.818559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.822566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.822796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.822815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.827941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.828155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.828174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.832453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.832672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.832691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.837138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.837374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.837393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.842335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.842553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.842572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.846364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.846581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.846604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.850412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.850636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.850654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.854692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.854915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.854934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.858989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.859221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.859244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.864185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.864415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.864434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.869403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.869618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.869637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.874720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.874937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.874956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.879038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.879262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.879279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.883436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.883669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.883687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.887781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.888003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.888022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.892019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.892239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.691 [2024-07-15 22:04:28.892261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.691 [2024-07-15 22:04:28.896699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.691 [2024-07-15 22:04:28.896770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.692 [2024-07-15 22:04:28.896788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.692 [2024-07-15 22:04:28.904390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.692 [2024-07-15 22:04:28.904727] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.692 [2024-07-15 22:04:28.904746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.692 [2024-07-15 22:04:28.914370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.692 [2024-07-15 22:04:28.914558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.692 [2024-07-15 22:04:28.914575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.692 [2024-07-15 22:04:28.920874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.692 [2024-07-15 22:04:28.921043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.692 [2024-07-15 22:04:28.921061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.692 [2024-07-15 22:04:28.926270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.692 [2024-07-15 22:04:28.926359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.692 [2024-07-15 22:04:28.926378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.931135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.931205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.931223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.935555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.935620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.935637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.939760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.939843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.939861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.944143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.944277] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.944295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.949468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.949531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.949549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.954785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.954852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.954869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.959304] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.959437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.959455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.963841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.963908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.963925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.967866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.967951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.967969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.971806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.971871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.971889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.975992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 
00:26:34.951 [2024-07-15 22:04:28.976056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.976078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.979913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.979976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.979994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.983859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.983940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.983958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.987947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.988169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.988189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.992555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.951 [2024-07-15 22:04:28.992692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.951 [2024-07-15 22:04:28.992710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.951 [2024-07-15 22:04:28.998934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:28.999093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:28.999112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.005067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.005253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.005271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.009900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.010098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.010117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.020568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.020806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.020825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.027391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.027500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.027519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.036474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.036710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.036730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.043420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.043531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.043548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.049958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.050077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.050095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.054829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.054899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.054915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.059317] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.059382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.059400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.063689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.063774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.063792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.067653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.067728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.067746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.071983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.072049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.072070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.076857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.076920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.076936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.081136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.081206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.081223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.952 [2024-07-15 22:04:29.085058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:34.952 [2024-07-15 22:04:29.085133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.952 [2024-07-15 22:04:29.085150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
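The pattern above is worth decoding once: NVMe/TCP can protect each DATA PDU with a CRC32C data digest, and SPDK's TCP transport recomputes that digest over every received payload (the data_crc32_calc_done completion in tcp.c). When the recomputed value disagrees with the digest carried on the wire, the transport logs the "Data digest error" seen here and the command completes with the retriable generic status COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what this test deliberately provokes. Below is a minimal, self-contained sketch of that receive-side check, assuming nothing from SPDK; crc32c() and data_digest_ok() are hypothetical helper names invented for the illustration, not SPDK APIs.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* CRC32C (Castagnoli, reflected polynomial 0x82F63B78): the algorithm
     * NVMe/TCP specifies for its optional header and data digests.
     * Bitwise form for clarity; real transports use table-driven or
     * instruction-accelerated variants. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    /* Receive-side check: recompute the digest over the DATA PDU payload
     * and compare it with the digest that followed the payload on the
     * wire. A mismatch is what the log reports as "Data digest error";
     * the command then completes with TRANSIENT TRANSPORT ERROR (00/22)
     * so the host knows the data was damaged in transit and may retry. */
    static int data_digest_ok(const void *payload, size_t len, uint32_t wire_digest)
    {
        return crc32c(payload, len) == wire_digest;
    }

    int main(void)
    {
        const char pdu_data[] = "123456789";  /* canonical CRC32C test vector */
        uint32_t digest = crc32c(pdu_data, sizeof(pdu_data) - 1);

        /* Prints E3069283, the published CRC32C check value. */
        printf("digest        : %08" PRIX32 "\n", digest);
        printf("intact PDU    : %s\n",
               data_digest_ok(pdu_data, sizeof(pdu_data) - 1, digest)
                   ? "ok" : "data digest error");
        printf("corrupted PDU : %s\n",
               data_digest_ok(pdu_data, sizeof(pdu_data) - 1, digest ^ 1u)
                   ? "ok" : "data digest error");
        return 0;
    }

Built with any C compiler (e.g. cc crc32c_demo.c, an arbitrary file name), the intact buffer verifies and the corrupted one reports a digest error, mirroring the per-PDU pass/fail decision the transport makes in the log above.
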
00:26:34.952 [2024-07-15 22:04:29.088981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90
00:26:34.952 [2024-07-15 22:04:29.089045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.952 [2024-07-15 22:04:29.089062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... further identical data digest error / transient transport error pairs between 22:04:29.094 and 22:04:29.237 elided ...]
00:26:35.214 [2024-07-15 22:04:29.241166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90
00:26:35.214 [2024-07-15 22:04:29.241236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214
[2024-07-15 22:04:29.241255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.245203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.245301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.245319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.249725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.249791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.249808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.253705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.253800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.253817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.257601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.257680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.257698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.261538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.261603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.261621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.265489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.265571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.265589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.269341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.269426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.269444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.273257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.273322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.273340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.277133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.277200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.277218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.281022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.281087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.281105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.285263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.285342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.285360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.289117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.289183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.289200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.292977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.293081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.293099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.297220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.297317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.297334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.302492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.302554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.302571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.307453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.307517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.307535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.311748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.311836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.311854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.316455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.316520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.316538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.320889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.320972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.320989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.325327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.325423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.325440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.330015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.330079] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.330100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.334900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.334964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.334982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.339993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.340105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.340122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.345561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.345626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.345644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.350729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.350794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.350811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.355126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.355198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.355216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.359133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.359205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.359223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.214 [2024-07-15 22:04:29.363139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x897800) with pdu=0x2000190fef90 00:26:35.214 [2024-07-15 22:04:29.363219] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.214 [2024-07-15 22:04:29.363241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.214 00:26:35.214 Latency(us) 00:26:35.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.214 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:35.215 nvme0n1 : 2.00 5986.06 748.26 0.00 0.00 2668.87 1695.39 11967.44 00:26:35.215 =================================================================================================================== 00:26:35.215 Total : 5986.06 748.26 0.00 0.00 2668.87 1695.39 11967.44 00:26:35.215 0 00:26:35.215 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:35.215 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:35.215 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:35.215 | .driver_specific 00:26:35.215 | .nvme_error 00:26:35.215 | .status_code 00:26:35.215 | .command_transient_transport_error' 00:26:35.215 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:35.474 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 386 > 0 )) 00:26:35.474 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3841924 00:26:35.474 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3841924 ']' 00:26:35.474 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3841924 00:26:35.474 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:35.474 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:35.474 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3841924 00:26:35.475 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:35.475 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:35.475 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3841924' 00:26:35.475 killing process with pid 3841924 00:26:35.475 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3841924 00:26:35.475 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.475 00:26:35.475 Latency(us) 00:26:35.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.475 =================================================================================================================== 00:26:35.475 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.475 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3841924 00:26:35.734 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3839804 00:26:35.734 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 
-- # '[' -z 3839804 ']' 00:26:35.734 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3839804 00:26:35.734 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:35.734 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:35.735 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3839804 00:26:35.735 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:35.735 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:35.735 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3839804' 00:26:35.735 killing process with pid 3839804 00:26:35.735 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3839804 00:26:35.735 22:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3839804 00:26:35.994 00:26:35.994 real 0m16.872s 00:26:35.994 user 0m32.263s 00:26:35.994 sys 0m4.464s 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.994 ************************************ 00:26:35.994 END TEST nvmf_digest_error 00:26:35.994 ************************************ 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:35.994 rmmod nvme_tcp 00:26:35.994 rmmod nvme_fabrics 00:26:35.994 rmmod nvme_keyring 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3839804 ']' 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3839804 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3839804 ']' 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3839804 00:26:35.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3839804) - No such process 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3839804 is not found' 00:26:35.994 Process with pid 3839804 is not found 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.994 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:35.995 22:04:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.995 22:04:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.995 22:04:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.529 22:04:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.529 00:26:38.529 real 0m41.313s 00:26:38.529 user 1m6.647s 00:26:38.529 sys 0m12.595s 00:26:38.529 22:04:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:38.529 22:04:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.529 ************************************ 00:26:38.529 END TEST nvmf_digest 00:26:38.529 ************************************ 00:26:38.529 22:04:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:38.529 22:04:32 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:26:38.529 22:04:32 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:26:38.529 22:04:32 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:26:38.529 22:04:32 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:38.529 22:04:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:38.529 22:04:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.529 22:04:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.529 ************************************ 00:26:38.529 START TEST nvmf_bdevperf 00:26:38.529 ************************************ 00:26:38.529 22:04:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:38.529 * Looking for test storage... 
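One note to close out the digest-error test above: its pass/fail gate, (( 386 > 0 )), is simply the count of transient transport errors pulled from bdevperf's iostat. A minimal standalone sketch of that check, assuming the bperf instance is still up and listening on /var/tmp/bperf.sock (paths and bdev name as in this run):

    # Fetch per-bdev iostat from the bperf app, then extract the counter the
    # digest-error test asserts on (non-zero means the injected CRC errors were seen).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'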
00:26:38.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 22:04:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 22:04:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 22:04:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 22:04:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 22:04:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=[... repeated /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin entries prepended ahead of the stock system PATH ...] 22:04:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=[... same toolchain directories prepended again, /opt/go/1.21.1/bin first ...] 22:04:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=[... same toolchain directories prepended again, /opt/protoc/21.7/bin first ...] 22:04:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 22:04:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo [... the resulting PATH value ...] 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 22:04:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 22:04:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 22:04:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 22:04:32 nvmf_tcp.nvmf_bdevperf --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.530 22:04:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.530 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:38.530 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:38.530 22:04:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:38.530 22:04:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:42.714 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:42.715 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:42.715 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:42.715 Found net devices under 0000:86:00.0: cvl_0_0 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:42.715 Found net devices under 0000:86:00.1: cvl_0_1 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.715 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.974 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:42.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:26:42.974 00:26:42.974 --- 10.0.0.2 ping statistics --- 00:26:42.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.974 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:42.974 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:42.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:26:42.974 00:26:42.974 --- 10.0.0.1 ping statistics --- 00:26:42.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.974 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:26:42.974 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.974 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:42.974 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:42.975 22:04:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.975 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3845824 00:26:42.975 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3845824 00:26:42.975 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3845824 ']' 00:26:42.975 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.975 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.975 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.975 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.975 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.975 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:42.975 [2024-07-15 22:04:37.051072] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:42.975 [2024-07-15 22:04:37.051115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.975 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.975 [2024-07-15 22:04:37.108009] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:42.975 [2024-07-15 22:04:37.188405] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
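For anyone reproducing this phy-mode setup outside the harness, the nvmftestinit plumbing traced above reduces to a handful of iproute2/iptables commands. A sketch using this run's interface names (cvl_0_0 moves into the target namespace; substitute your own NIC names):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1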
00:26:42.975 [2024-07-15 22:04:37.188440] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.975 [2024-07-15 22:04:37.188447] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.975 [2024-07-15 22:04:37.188453] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.975 [2024-07-15 22:04:37.188458] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.975 [2024-07-15 22:04:37.188495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.975 [2024-07-15 22:04:37.188580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.975 [2024-07-15 22:04:37.188581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 [2024-07-15 22:04:37.900975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 Malloc0 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
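The tgt_init sequence above drives everything through rpc_cmd, the harness wrapper around scripts/rpc.py. Done by hand, the same bring-up looks like the sketch below; /var/tmp/spdk.sock is the assumed default RPC socket, and since this target runs inside cvl_0_0_ns_spdk the calls would be prefixed with ip netns exec accordingly:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192    # flags copied verbatim from the trace above
    $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420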
00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.925 [2024-07-15 22:04:37.958736] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:43.925 { 00:26:43.925 "params": { 00:26:43.925 "name": "Nvme$subsystem", 00:26:43.925 "trtype": "$TEST_TRANSPORT", 00:26:43.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.925 "adrfam": "ipv4", 00:26:43.925 "trsvcid": "$NVMF_PORT", 00:26:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.925 "hdgst": ${hdgst:-false}, 00:26:43.925 "ddgst": ${ddgst:-false} 00:26:43.925 }, 00:26:43.925 "method": "bdev_nvme_attach_controller" 00:26:43.925 } 00:26:43.925 EOF 00:26:43.925 )") 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:43.925 22:04:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:43.925 "params": { 00:26:43.925 "name": "Nvme1", 00:26:43.925 "trtype": "tcp", 00:26:43.925 "traddr": "10.0.0.2", 00:26:43.925 "adrfam": "ipv4", 00:26:43.925 "trsvcid": "4420", 00:26:43.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:43.926 "hdgst": false, 00:26:43.926 "ddgst": false 00:26:43.926 }, 00:26:43.926 "method": "bdev_nvme_attach_controller" 00:26:43.926 }' 00:26:43.926 [2024-07-15 22:04:38.006072] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:43.926 [2024-07-15 22:04:38.006120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845958 ] 00:26:43.926 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.926 [2024-07-15 22:04:38.060942] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.926 [2024-07-15 22:04:38.134253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.184 Running I/O for 1 seconds... 
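The JSON that gen_nvmf_target_json pipes to bdevperf on /dev/fd/62 above is printed flattened; reindented, the controller stanza reads as below. One assumption not visible in this capture: the stanza is wrapped in the usual {"subsystems": [{"subsystem": "bdev", "config": [...]}]} envelope, shown here as a sketch that writes the equivalent config to a file instead of a pipe:

    cat > /tmp/bperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF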
00:26:45.119 
00:26:45.119                                                                                              Latency(us)
00:26:45.119 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:45.119 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:45.119      Verification LBA range: start 0x0 length 0x4000
00:26:45.119      Nvme1n1                :       1.01   10861.84      42.43       0.00       0.00   11739.69    2322.25   17780.20
00:26:45.119 ===================================================================================================================
00:26:45.119      Total                  :              10861.84      42.43       0.00       0.00   11739.69    2322.25   17780.20
00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3846191 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.378 { 00:26:45.378 "params": { 00:26:45.378 "name": "Nvme$subsystem", 00:26:45.378 "trtype": "$TEST_TRANSPORT", 00:26:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.378 "adrfam": "ipv4", 00:26:45.378 "trsvcid": "$NVMF_PORT", 00:26:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.378 "hdgst": ${hdgst:-false}, 00:26:45.378 "ddgst": ${ddgst:-false} 00:26:45.378 }, 00:26:45.378 "method": "bdev_nvme_attach_controller" 00:26:45.378 } 00:26:45.378 EOF 00:26:45.378 )") 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:45.378 22:04:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:45.378 "params": { 00:26:45.379 "name": "Nvme1", 00:26:45.379 "trtype": "tcp", 00:26:45.379 "traddr": "10.0.0.2", 00:26:45.379 "adrfam": "ipv4", 00:26:45.379 "trsvcid": "4420", 00:26:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:45.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:45.379 "hdgst": false, 00:26:45.379 "ddgst": false 00:26:45.379 }, 00:26:45.379 "method": "bdev_nvme_attach_controller" 00:26:45.379 }' [2024-07-15 22:04:39.568671] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... [2024-07-15 22:04:39.568720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846191 ] 00:26:45.637 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.637 [2024-07-15 22:04:39.624085] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.637 [2024-07-15 22:04:39.693677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.896 Running I/O for 15 seconds...
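The table's numbers are self-consistent: at 4096-byte I/O, 10861.84 IOPS is 10861.84 * 4096 / 2^20 = 42.43 MiB/s, matching the MiB/s column. What follows is the failover half of the test: a second, 15-second bdevperf run is started in the background, I/O is allowed to get in flight, and the nvmf target is then SIGKILLed out from under it. Condensed from the trace (the PIDs and sleeps are this run's actual values; the backgrounding plumbing is inferred, and /tmp/bdevperf.json from the sketch above stands in for the harness's /dev/fd/63 process substitution):

# Second pass: same workload for 15 seconds, left running in the background
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!        # 3846191 in this run
sleep 3               # let the verify workload get going
kill -9 3845824       # SIGKILL the nvmf target (its PID in this run)
sleep 3               # the initiator must now abort queued I/O and reconnect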
00:26:48.431 22:04:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3845824 00:26:48.431 22:04:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:48.431 [2024-07-15 22:04:42.539473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.431 [2024-07-15 22:04:42.539512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.431 [2024-07-15 22:04:42.539530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.431 [2024-07-15 22:04:42.539539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.431 [2024-07-15 22:04:42.539550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.431 [2024-07-15 22:04:42.539558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.431 [2024-07-15 22:04:42.539568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.431 [2024-07-15 22:04:42.539575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.431 [2024-07-15 22:04:42.539585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.431 [2024-07-15 22:04:42.539592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.431 [2024-07-15 22:04:42.539601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.431 [2024-07-15 22:04:42.539608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.431 [2024-07-15 22:04:42.539616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.431 [2024-07-15 22:04:42.539629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.431 [2024-07-15 22:04:42.539639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.431 [2024-07-15 22:04:42.539646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.431 [2024-07-15 22:04:42.539654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.431 [2024-07-15 22:04:42.539662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.431 [2024-07-15 22:04:42.539670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539680] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539862] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.539985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.539994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.432 [2024-07-15 22:04:42.540158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.432 [2024-07-15 22:04:42.540165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:48.433 [2024-07-15 22:04:42.540181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 
22:04:42.540459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.433 [2024-07-15 22:04:42.540645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.433 [2024-07-15 22:04:42.540662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.433 [2024-07-15 22:04:42.540676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.433 [2024-07-15 22:04:42.540692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.433 [2024-07-15 22:04:42.540707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.433 [2024-07-15 22:04:42.540723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.433 [2024-07-15 22:04:42.540737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.433 [2024-07-15 22:04:42.540752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:72 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.433 [2024-07-15 22:04:42.540768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.433 [2024-07-15 22:04:42.540776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95728 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.540988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.540997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 
22:04:42.541062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.434 [2024-07-15 22:04:42.541259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.434 [2024-07-15 22:04:42.541267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.435 [2024-07-15 22:04:42.541596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.435 [2024-07-15 22:04:42.541612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1188200 is same with the state(5) to be set 00:26:48.435 [2024-07-15 22:04:42.541628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.435 [2024-07-15 22:04:42.541633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.435 [2024-07-15 22:04:42.541639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95584 len:8 PRP1 0x0 PRP2 0x0 00:26:48.435 [2024-07-15 22:04:42.541647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541690] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1188200 was disconnected and freed. reset controller. 
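The flood above is a single teardown, not repeated failures: once the SIGKILL dropped the TCP connection, every command still queued on I/O qpair 1 was completed manually as ABORTED - SQ DELETION (00/08). The LBAs run from 95072 to 96088 in steps of 8, i.e. exactly 128 commands, one per slot of the -q 128 queue depth, after which the qpair (0x1188200) is freed and a controller reset is scheduled. A hypothetical one-liner to confirm that count when triaging a saved copy of such a log (the filename is illustrative):

# Expect ~128 matches: one abort completion per command in flight at qd 128
grep -o 'ABORTED - SQ DELETION (00/08) qid:1' nvmf_bdevperf.log | wc -l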
00:26:48.435 [2024-07-15 22:04:42.541735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.435 [2024-07-15 22:04:42.541745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.435 [2024-07-15 22:04:42.541760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.435 [2024-07-15 22:04:42.541774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.435 [2024-07-15 22:04:42.541789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.435 [2024-07-15 22:04:42.541796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.435 [2024-07-15 22:04:42.544638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.435 [2024-07-15 22:04:42.544664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.435 [2024-07-15 22:04:42.545334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.435 [2024-07-15 22:04:42.545381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.435 [2024-07-15 22:04:42.545404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.435 [2024-07-15 22:04:42.545988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.436 [2024-07-15 22:04:42.546376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.436 [2024-07-15 22:04:42.546385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.436 [2024-07-15 22:04:42.546392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.436 [2024-07-15 22:04:42.549235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.436 [2024-07-15 22:04:42.557917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.436 [2024-07-15 22:04:42.558325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.436 [2024-07-15 22:04:42.558373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.436 [2024-07-15 22:04:42.558396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.436 [2024-07-15 22:04:42.558979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.436 [2024-07-15 22:04:42.559187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.436 [2024-07-15 22:04:42.559197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.436 [2024-07-15 22:04:42.559204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.436 [2024-07-15 22:04:42.561999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.436 [2024-07-15 22:04:42.570873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.436 [2024-07-15 22:04:42.571343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.436 [2024-07-15 22:04:42.571362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.436 [2024-07-15 22:04:42.571370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.436 [2024-07-15 22:04:42.571544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.436 [2024-07-15 22:04:42.571718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.436 [2024-07-15 22:04:42.571727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.436 [2024-07-15 22:04:42.571741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.436 [2024-07-15 22:04:42.574398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.436 [2024-07-15 22:04:42.583849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.436 [2024-07-15 22:04:42.584332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.436 [2024-07-15 22:04:42.584378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.436 [2024-07-15 22:04:42.584402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.436 [2024-07-15 22:04:42.584982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.436 [2024-07-15 22:04:42.585416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.436 [2024-07-15 22:04:42.585426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.436 [2024-07-15 22:04:42.585432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.436 [2024-07-15 22:04:42.588098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.436 [2024-07-15 22:04:42.596784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.436 [2024-07-15 22:04:42.597280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.436 [2024-07-15 22:04:42.597299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.436 [2024-07-15 22:04:42.597306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.436 [2024-07-15 22:04:42.597479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.436 [2024-07-15 22:04:42.597652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.436 [2024-07-15 22:04:42.597662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.436 [2024-07-15 22:04:42.597670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.436 [2024-07-15 22:04:42.600365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.436 [2024-07-15 22:04:42.609644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.436 [2024-07-15 22:04:42.610104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.436 [2024-07-15 22:04:42.610147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.436 [2024-07-15 22:04:42.610171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.436 [2024-07-15 22:04:42.610764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.436 [2024-07-15 22:04:42.611355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.436 [2024-07-15 22:04:42.611392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.436 [2024-07-15 22:04:42.611399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.436 [2024-07-15 22:04:42.614032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.436 [2024-07-15 22:04:42.622471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.436 [2024-07-15 22:04:42.622925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.436 [2024-07-15 22:04:42.622968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.436 [2024-07-15 22:04:42.622992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.436 [2024-07-15 22:04:42.623544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.436 [2024-07-15 22:04:42.623709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.436 [2024-07-15 22:04:42.623719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.436 [2024-07-15 22:04:42.623725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.436 [2024-07-15 22:04:42.626422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.436 [2024-07-15 22:04:42.635409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.436 [2024-07-15 22:04:42.635776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.436 [2024-07-15 22:04:42.635793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.436 [2024-07-15 22:04:42.635800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.436 [2024-07-15 22:04:42.635962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.436 [2024-07-15 22:04:42.636126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.436 [2024-07-15 22:04:42.636135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.436 [2024-07-15 22:04:42.636141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.436 [2024-07-15 22:04:42.638841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.436 [2024-07-15 22:04:42.648415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.436 [2024-07-15 22:04:42.648826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.436 [2024-07-15 22:04:42.648868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.436 [2024-07-15 22:04:42.648890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.436 [2024-07-15 22:04:42.649305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.436 [2024-07-15 22:04:42.649478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.436 [2024-07-15 22:04:42.649488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.436 [2024-07-15 22:04:42.649494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.436 [2024-07-15 22:04:42.652157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.437 [2024-07-15 22:04:42.661474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.437 [2024-07-15 22:04:42.661864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.437 [2024-07-15 22:04:42.661908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.437 [2024-07-15 22:04:42.661931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.437 [2024-07-15 22:04:42.662388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.437 [2024-07-15 22:04:42.662557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.437 [2024-07-15 22:04:42.662568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.437 [2024-07-15 22:04:42.662574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.437 [2024-07-15 22:04:42.665316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.697 [2024-07-15 22:04:42.674405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.697 [2024-07-15 22:04:42.674748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.697 [2024-07-15 22:04:42.674765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.697 [2024-07-15 22:04:42.674773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.697 [2024-07-15 22:04:42.674936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.697 [2024-07-15 22:04:42.675100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.697 [2024-07-15 22:04:42.675109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.697 [2024-07-15 22:04:42.675115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.697 [2024-07-15 22:04:42.677828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.697 [2024-07-15 22:04:42.687425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.697 [2024-07-15 22:04:42.687830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.697 [2024-07-15 22:04:42.687848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.697 [2024-07-15 22:04:42.687855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.697 [2024-07-15 22:04:42.688018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.697 [2024-07-15 22:04:42.688181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.697 [2024-07-15 22:04:42.688190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.697 [2024-07-15 22:04:42.688196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.697 [2024-07-15 22:04:42.690896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.697 [2024-07-15 22:04:42.700413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.700804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.700821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.700828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.700992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.701155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.701165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.701171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.703867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.698 [2024-07-15 22:04:42.713284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.713764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.713806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.713829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.714423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.714893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.714903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.714910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.717605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.698 [2024-07-15 22:04:42.726190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.726592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.726610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.726620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.726786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.726950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.726959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.726965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.729719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.698 [2024-07-15 22:04:42.739109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.739572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.739590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.739597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.739770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.739944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.739954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.739960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.742603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.698 [2024-07-15 22:04:42.752238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.752641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.752670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.752681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.752855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.753029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.753038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.753044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.755875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.698 [2024-07-15 22:04:42.765409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.765796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.765814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.765821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.765999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.766177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.766187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.766193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.769025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.698 [2024-07-15 22:04:42.778561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.779013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.779030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.779038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.779215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.779399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.779410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.779417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.782252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.698 [2024-07-15 22:04:42.791624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.792138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.792182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.792205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.792792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.793171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.793183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.793190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.796066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.698 [2024-07-15 22:04:42.804855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.805349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.805367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.805374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.805553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.805731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.805739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.805745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.808583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.698 [2024-07-15 22:04:42.818113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.818585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.818603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.818611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.818787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.818964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.818974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.818981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.698 [2024-07-15 22:04:42.821899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.698 [2024-07-15 22:04:42.831423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.698 [2024-07-15 22:04:42.831924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.698 [2024-07-15 22:04:42.831943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.698 [2024-07-15 22:04:42.831952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.698 [2024-07-15 22:04:42.832147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.698 [2024-07-15 22:04:42.832355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.698 [2024-07-15 22:04:42.832366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.698 [2024-07-15 22:04:42.832373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.699 [2024-07-15 22:04:42.835375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.699 [2024-07-15 22:04:42.844664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.699 [2024-07-15 22:04:42.845153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.699 [2024-07-15 22:04:42.845170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.699 [2024-07-15 22:04:42.845178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.699 [2024-07-15 22:04:42.845366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.699 [2024-07-15 22:04:42.845551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.699 [2024-07-15 22:04:42.845561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.699 [2024-07-15 22:04:42.845567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.699 [2024-07-15 22:04:42.848483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.699 [2024-07-15 22:04:42.857969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.699 [2024-07-15 22:04:42.858455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.699 [2024-07-15 22:04:42.858473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.699 [2024-07-15 22:04:42.858480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.699 [2024-07-15 22:04:42.858663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.699 [2024-07-15 22:04:42.858845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.699 [2024-07-15 22:04:42.858854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.699 [2024-07-15 22:04:42.858861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.699 [2024-07-15 22:04:42.861877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.699 [2024-07-15 22:04:42.871220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.699 [2024-07-15 22:04:42.871681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.699 [2024-07-15 22:04:42.871699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.699 [2024-07-15 22:04:42.871706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.699 [2024-07-15 22:04:42.871888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.699 [2024-07-15 22:04:42.872071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.699 [2024-07-15 22:04:42.872081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.699 [2024-07-15 22:04:42.872088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.699 [2024-07-15 22:04:42.875114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.699 [2024-07-15 22:04:42.884356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.699 [2024-07-15 22:04:42.884841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.699 [2024-07-15 22:04:42.884858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.699 [2024-07-15 22:04:42.884865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.699 [2024-07-15 22:04:42.885051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.699 [2024-07-15 22:04:42.885240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.699 [2024-07-15 22:04:42.885249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.699 [2024-07-15 22:04:42.885257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.699 [2024-07-15 22:04:42.888175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.699 [2024-07-15 22:04:42.897602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.699 [2024-07-15 22:04:42.898085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.699 [2024-07-15 22:04:42.898102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.699 [2024-07-15 22:04:42.898109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.699 [2024-07-15 22:04:42.898292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.699 [2024-07-15 22:04:42.898470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.699 [2024-07-15 22:04:42.898479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.699 [2024-07-15 22:04:42.898486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.699 [2024-07-15 22:04:42.901316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.699 [2024-07-15 22:04:42.910683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.699 [2024-07-15 22:04:42.911169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.699 [2024-07-15 22:04:42.911212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.699 [2024-07-15 22:04:42.911250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.699 [2024-07-15 22:04:42.911831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.699 [2024-07-15 22:04:42.912032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.699 [2024-07-15 22:04:42.912042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.699 [2024-07-15 22:04:42.912049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.699 [2024-07-15 22:04:42.914882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.699 [2024-07-15 22:04:42.923783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.699 [2024-07-15 22:04:42.924248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.699 [2024-07-15 22:04:42.924296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.699 [2024-07-15 22:04:42.924320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.699 [2024-07-15 22:04:42.924902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.699 [2024-07-15 22:04:42.925358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.699 [2024-07-15 22:04:42.925369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.699 [2024-07-15 22:04:42.925379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.699 [2024-07-15 22:04:42.928045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.699 [2024-07-15 22:04:42.936899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.699 [2024-07-15 22:04:42.937357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.699 [2024-07-15 22:04:42.937416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.699 [2024-07-15 22:04:42.937442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:42.938025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:42.938457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:42.938469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:42.938475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:42.941143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.960 [2024-07-15 22:04:42.949805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:42.950211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:42.950233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:42.950241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:42.950405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:42.950568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:42.950577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:42.950583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:42.953272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.960 [2024-07-15 22:04:42.962676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:42.963151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:42.963196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:42.963217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:42.963812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:42.964231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:42.964241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:42.964247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:42.966839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.960 [2024-07-15 22:04:42.975572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:42.976031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:42.976083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:42.976106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:42.976704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:42.977297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:42.977323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:42.977330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:42.979945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.960 [2024-07-15 22:04:42.988459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:42.988926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:42.988943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:42.988949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:42.989111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:42.989298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:42.989307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:42.989314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:42.991984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.960 [2024-07-15 22:04:43.001359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:43.001815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:43.001859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:43.001881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:43.002397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:43.002563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:43.002572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:43.002578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:43.005172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.960 [2024-07-15 22:04:43.014199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:43.014664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:43.014682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:43.014689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:43.014861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:43.015037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:43.015046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:43.015053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:43.017742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.960 [2024-07-15 22:04:43.027019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:43.027495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:43.027540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:43.027562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:43.028140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:43.028355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:43.028365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:43.028371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:43.031038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.960 [2024-07-15 22:04:43.039953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:43.040430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:43.040473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:43.040495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:43.041074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:43.041329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:43.041339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:43.041346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:43.044009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.960 [2024-07-15 22:04:43.053174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:43.053565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:43.053583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:43.053590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:43.053768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.960 [2024-07-15 22:04:43.053946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.960 [2024-07-15 22:04:43.053968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.960 [2024-07-15 22:04:43.053975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.960 [2024-07-15 22:04:43.056723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.960 [2024-07-15 22:04:43.066099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.960 [2024-07-15 22:04:43.066570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.960 [2024-07-15 22:04:43.066613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.960 [2024-07-15 22:04:43.066636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.960 [2024-07-15 22:04:43.067215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.961 [2024-07-15 22:04:43.067634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.961 [2024-07-15 22:04:43.067644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.961 [2024-07-15 22:04:43.067651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.961 [2024-07-15 22:04:43.071480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.961 [2024-07-15 22:04:43.079799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.961 [2024-07-15 22:04:43.080287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.961 [2024-07-15 22:04:43.080333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.961 [2024-07-15 22:04:43.080355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.961 [2024-07-15 22:04:43.080802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.961 [2024-07-15 22:04:43.080972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.961 [2024-07-15 22:04:43.080981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.961 [2024-07-15 22:04:43.080988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.961 [2024-07-15 22:04:43.083722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.961 [2024-07-15 22:04:43.092633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.961 [2024-07-15 22:04:43.093076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.961 [2024-07-15 22:04:43.093092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.961 [2024-07-15 22:04:43.093099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.961 [2024-07-15 22:04:43.093286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.961 [2024-07-15 22:04:43.093460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.961 [2024-07-15 22:04:43.093469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.961 [2024-07-15 22:04:43.093475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.961 [2024-07-15 22:04:43.096138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.961 [2024-07-15 22:04:43.105633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.961 [2024-07-15 22:04:43.106095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.961 [2024-07-15 22:04:43.106112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.961 [2024-07-15 22:04:43.106122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.961 [2024-07-15 22:04:43.106311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.961 [2024-07-15 22:04:43.106485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.961 [2024-07-15 22:04:43.106495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.961 [2024-07-15 22:04:43.106501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.961 [2024-07-15 22:04:43.109160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.961 [2024-07-15 22:04:43.118544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.961 [2024-07-15 22:04:43.118939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.961 [2024-07-15 22:04:43.118955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.961 [2024-07-15 22:04:43.118962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.961 [2024-07-15 22:04:43.119125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.961 [2024-07-15 22:04:43.119313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.961 [2024-07-15 22:04:43.119323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.961 [2024-07-15 22:04:43.119330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.961 [2024-07-15 22:04:43.122004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.961 [2024-07-15 22:04:43.131439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.961 [2024-07-15 22:04:43.131917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.961 [2024-07-15 22:04:43.131960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.961 [2024-07-15 22:04:43.131981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.961 [2024-07-15 22:04:43.132473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.961 [2024-07-15 22:04:43.132648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.961 [2024-07-15 22:04:43.132657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.961 [2024-07-15 22:04:43.132664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.961 [2024-07-15 22:04:43.135317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.961 [2024-07-15 22:04:43.144239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.961 [2024-07-15 22:04:43.144737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.961 [2024-07-15 22:04:43.144780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.961 [2024-07-15 22:04:43.144801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.961 [2024-07-15 22:04:43.145378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.961 [2024-07-15 22:04:43.145554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.961 [2024-07-15 22:04:43.145566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.961 [2024-07-15 22:04:43.145573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.961 [2024-07-15 22:04:43.148227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.961 [2024-07-15 22:04:43.157073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.961 [2024-07-15 22:04:43.157546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.961 [2024-07-15 22:04:43.157589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.961 [2024-07-15 22:04:43.157610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.961 [2024-07-15 22:04:43.158190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.961 [2024-07-15 22:04:43.158483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.961 [2024-07-15 22:04:43.158497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.961 [2024-07-15 22:04:43.158506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.961 [2024-07-15 22:04:43.162565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.961 [2024-07-15 22:04:43.170470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.961 [2024-07-15 22:04:43.170934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.961 [2024-07-15 22:04:43.170977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:48.961 [2024-07-15 22:04:43.170998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:48.961 [2024-07-15 22:04:43.171504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:48.961 [2024-07-15 22:04:43.171678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.961 [2024-07-15 22:04:43.171687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.961 [2024-07-15 22:04:43.171694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.961 [2024-07-15 22:04:43.174396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.961 [2024-07-15 22:04:43.183304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.961 [2024-07-15 22:04:43.183766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.961 [2024-07-15 22:04:43.183782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:48.961 [2024-07-15 22:04:43.183789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:48.961 [2024-07-15 22:04:43.183953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:48.961 [2024-07-15 22:04:43.184117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.961 [2024-07-15 22:04:43.184127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.961 [2024-07-15 22:04:43.184133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.961 [2024-07-15 22:04:43.186820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.961 [2024-07-15 22:04:43.196237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.961 [2024-07-15 22:04:43.196754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.961 [2024-07-15 22:04:43.196771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:48.961 [2024-07-15 22:04:43.196778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:48.961 [2024-07-15 22:04:43.196942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:48.961 [2024-07-15 22:04:43.197107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.961 [2024-07-15 22:04:43.197118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.961 [2024-07-15 22:04:43.197124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.961 [2024-07-15 22:04:43.199876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.221 [2024-07-15 22:04:43.209219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.221 [2024-07-15 22:04:43.209629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.221 [2024-07-15 22:04:43.209673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.221 [2024-07-15 22:04:43.209696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.221 [2024-07-15 22:04:43.210290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.221 [2024-07-15 22:04:43.210874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.221 [2024-07-15 22:04:43.210899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.221 [2024-07-15 22:04:43.210926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.221 [2024-07-15 22:04:43.213563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.221 [2024-07-15 22:04:43.222148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.221 [2024-07-15 22:04:43.222612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.221 [2024-07-15 22:04:43.222656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.221 [2024-07-15 22:04:43.222677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.221 [2024-07-15 22:04:43.223271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.221 [2024-07-15 22:04:43.223856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.221 [2024-07-15 22:04:43.223881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.223903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.226570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.235080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.222 [2024-07-15 22:04:43.235523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.222 [2024-07-15 22:04:43.235540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.222 [2024-07-15 22:04:43.235547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.222 [2024-07-15 22:04:43.235717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.222 [2024-07-15 22:04:43.235880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.222 [2024-07-15 22:04:43.235890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.235895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.238587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.247958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.222 [2024-07-15 22:04:43.248428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.222 [2024-07-15 22:04:43.248445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.222 [2024-07-15 22:04:43.248453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.222 [2024-07-15 22:04:43.248615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.222 [2024-07-15 22:04:43.248778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.222 [2024-07-15 22:04:43.248787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.248793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.251506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.260869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.222 [2024-07-15 22:04:43.261330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.222 [2024-07-15 22:04:43.261374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.222 [2024-07-15 22:04:43.261396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.222 [2024-07-15 22:04:43.261907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.222 [2024-07-15 22:04:43.262070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.222 [2024-07-15 22:04:43.262080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.262086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.264780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.273699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.222 [2024-07-15 22:04:43.274174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.222 [2024-07-15 22:04:43.274216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.222 [2024-07-15 22:04:43.274252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.222 [2024-07-15 22:04:43.274830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.222 [2024-07-15 22:04:43.275003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.222 [2024-07-15 22:04:43.275011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.275020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.277657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.286632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.222 [2024-07-15 22:04:43.287098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.222 [2024-07-15 22:04:43.287140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.222 [2024-07-15 22:04:43.287162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.222 [2024-07-15 22:04:43.287744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.222 [2024-07-15 22:04:43.287919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.222 [2024-07-15 22:04:43.287928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.287935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.290570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.299583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.222 [2024-07-15 22:04:43.299992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.222 [2024-07-15 22:04:43.300010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.222 [2024-07-15 22:04:43.300017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.222 [2024-07-15 22:04:43.300189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.222 [2024-07-15 22:04:43.300368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.222 [2024-07-15 22:04:43.300377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.300384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.303203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.312724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.222 [2024-07-15 22:04:43.313194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.222 [2024-07-15 22:04:43.313211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.222 [2024-07-15 22:04:43.313219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.222 [2024-07-15 22:04:43.313400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.222 [2024-07-15 22:04:43.313573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.222 [2024-07-15 22:04:43.313582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.313589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.316342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.325761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.222 [2024-07-15 22:04:43.326242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.222 [2024-07-15 22:04:43.326294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.222 [2024-07-15 22:04:43.326318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.222 [2024-07-15 22:04:43.326899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.222 [2024-07-15 22:04:43.327397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.222 [2024-07-15 22:04:43.327407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.327414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.330082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.338711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.222 [2024-07-15 22:04:43.339180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.222 [2024-07-15 22:04:43.339197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.222 [2024-07-15 22:04:43.339203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.222 [2024-07-15 22:04:43.339395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.222 [2024-07-15 22:04:43.339568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.222 [2024-07-15 22:04:43.339578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.222 [2024-07-15 22:04:43.339585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.222 [2024-07-15 22:04:43.342242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.222 [2024-07-15 22:04:43.351671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.223 [2024-07-15 22:04:43.352141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.223 [2024-07-15 22:04:43.352183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.223 [2024-07-15 22:04:43.352206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.223 [2024-07-15 22:04:43.352801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.223 [2024-07-15 22:04:43.353291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.223 [2024-07-15 22:04:43.353302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.223 [2024-07-15 22:04:43.353309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.223 [2024-07-15 22:04:43.355940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.223 [2024-07-15 22:04:43.364595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.223 [2024-07-15 22:04:43.365048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.223 [2024-07-15 22:04:43.365066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.223 [2024-07-15 22:04:43.365073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.223 [2024-07-15 22:04:43.365252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.223 [2024-07-15 22:04:43.365428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.223 [2024-07-15 22:04:43.365437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.223 [2024-07-15 22:04:43.365444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.223 [2024-07-15 22:04:43.368106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.223 [2024-07-15 22:04:43.377557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.223 [2024-07-15 22:04:43.377882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.223 [2024-07-15 22:04:43.377898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.223 [2024-07-15 22:04:43.377905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.223 [2024-07-15 22:04:43.378067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.223 [2024-07-15 22:04:43.378235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.223 [2024-07-15 22:04:43.378245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.223 [2024-07-15 22:04:43.378268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.223 [2024-07-15 22:04:43.380942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.223 [2024-07-15 22:04:43.390509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.223 [2024-07-15 22:04:43.390987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.223 [2024-07-15 22:04:43.391032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.223 [2024-07-15 22:04:43.391054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.223 [2024-07-15 22:04:43.391514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.223 [2024-07-15 22:04:43.391679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.223 [2024-07-15 22:04:43.391688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.223 [2024-07-15 22:04:43.391695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.223 [2024-07-15 22:04:43.394311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.223 [2024-07-15 22:04:43.403556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.223 [2024-07-15 22:04:43.404026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.223 [2024-07-15 22:04:43.404069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.223 [2024-07-15 22:04:43.404091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.223 [2024-07-15 22:04:43.404499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.223 [2024-07-15 22:04:43.404674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.223 [2024-07-15 22:04:43.404684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.223 [2024-07-15 22:04:43.404690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.223 [2024-07-15 22:04:43.407342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.223 [2024-07-15 22:04:43.416572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.223 [2024-07-15 22:04:43.417012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.223 [2024-07-15 22:04:43.417028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.223 [2024-07-15 22:04:43.417035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.223 [2024-07-15 22:04:43.417198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.223 [2024-07-15 22:04:43.417392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.223 [2024-07-15 22:04:43.417403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.223 [2024-07-15 22:04:43.417409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.223 [2024-07-15 22:04:43.420071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.223 [2024-07-15 22:04:43.429510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.223 [2024-07-15 22:04:43.429980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.223 [2024-07-15 22:04:43.430023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.223 [2024-07-15 22:04:43.430044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.223 [2024-07-15 22:04:43.430510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.223 [2024-07-15 22:04:43.430686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.223 [2024-07-15 22:04:43.430696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.223 [2024-07-15 22:04:43.430703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.223 [2024-07-15 22:04:43.433350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.223 [2024-07-15 22:04:43.442326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.223 [2024-07-15 22:04:43.442796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.223 [2024-07-15 22:04:43.442838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.223 [2024-07-15 22:04:43.442859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.223 [2024-07-15 22:04:43.443394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.223 [2024-07-15 22:04:43.443568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.223 [2024-07-15 22:04:43.443578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.223 [2024-07-15 22:04:43.443585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.223 [2024-07-15 22:04:43.446240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.223 [2024-07-15 22:04:43.455213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.223 [2024-07-15 22:04:43.455686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.223 [2024-07-15 22:04:43.455728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.223 [2024-07-15 22:04:43.455758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.223 [2024-07-15 22:04:43.456185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.223 [2024-07-15 22:04:43.456377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.223 [2024-07-15 22:04:43.456387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.223 [2024-07-15 22:04:43.456394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.223 [2024-07-15 22:04:43.459179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.482 [2024-07-15 22:04:43.468218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.482 [2024-07-15 22:04:43.468683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.482 [2024-07-15 22:04:43.468725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.482 [2024-07-15 22:04:43.468749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.482 [2024-07-15 22:04:43.469266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.482 [2024-07-15 22:04:43.469441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.482 [2024-07-15 22:04:43.469451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.482 [2024-07-15 22:04:43.469457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.482 [2024-07-15 22:04:43.472211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.482 [2024-07-15 22:04:43.481045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.482 [2024-07-15 22:04:43.481518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.482 [2024-07-15 22:04:43.481562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.482 [2024-07-15 22:04:43.481584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.482 [2024-07-15 22:04:43.482164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.482 [2024-07-15 22:04:43.482446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.482 [2024-07-15 22:04:43.482457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.482 [2024-07-15 22:04:43.482463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.482 [2024-07-15 22:04:43.486512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.482 [2024-07-15 22:04:43.494591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.482 [2024-07-15 22:04:43.495069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.482 [2024-07-15 22:04:43.495113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.482 [2024-07-15 22:04:43.495135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.482 [2024-07-15 22:04:43.495618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.495793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.495806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.495812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.498512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.507460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.483 [2024-07-15 22:04:43.507877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.483 [2024-07-15 22:04:43.507925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.483 [2024-07-15 22:04:43.507948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.483 [2024-07-15 22:04:43.508462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.508627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.508635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.508641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.511235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.520270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.483 [2024-07-15 22:04:43.520736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.483 [2024-07-15 22:04:43.520777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.483 [2024-07-15 22:04:43.520798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.483 [2024-07-15 22:04:43.521395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.521811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.521821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.521827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.524421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.533141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.483 [2024-07-15 22:04:43.533621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.483 [2024-07-15 22:04:43.533639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.483 [2024-07-15 22:04:43.533647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.483 [2024-07-15 22:04:43.533819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.533992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.534002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.534008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.536646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.546085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.483 [2024-07-15 22:04:43.546553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.483 [2024-07-15 22:04:43.546570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.483 [2024-07-15 22:04:43.546577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.483 [2024-07-15 22:04:43.546749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.546920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.546930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.546936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.549629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.559066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.483 [2024-07-15 22:04:43.559568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.483 [2024-07-15 22:04:43.559611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.483 [2024-07-15 22:04:43.559634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.483 [2024-07-15 22:04:43.560214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.560813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.560839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.560846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.563685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.572074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.483 [2024-07-15 22:04:43.572542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.483 [2024-07-15 22:04:43.572588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.483 [2024-07-15 22:04:43.572611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.483 [2024-07-15 22:04:43.573038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.573211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.573221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.573234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.575862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.585013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.483 [2024-07-15 22:04:43.585527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.483 [2024-07-15 22:04:43.585571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.483 [2024-07-15 22:04:43.585594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.483 [2024-07-15 22:04:43.585923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.586087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.586096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.586102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.588740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.597927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.483 [2024-07-15 22:04:43.598402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.483 [2024-07-15 22:04:43.598445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.483 [2024-07-15 22:04:43.598468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.483 [2024-07-15 22:04:43.598712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.598876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.598885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.598890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.601658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.610803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.483 [2024-07-15 22:04:43.611267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.483 [2024-07-15 22:04:43.611284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.483 [2024-07-15 22:04:43.611291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.483 [2024-07-15 22:04:43.611454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.483 [2024-07-15 22:04:43.611617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.483 [2024-07-15 22:04:43.611626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.483 [2024-07-15 22:04:43.611632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.483 [2024-07-15 22:04:43.614328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.483 [2024-07-15 22:04:43.623766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.484 [2024-07-15 22:04:43.624234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.484 [2024-07-15 22:04:43.624251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.484 [2024-07-15 22:04:43.624257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.484 [2024-07-15 22:04:43.624420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.484 [2024-07-15 22:04:43.624583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.484 [2024-07-15 22:04:43.624592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.484 [2024-07-15 22:04:43.624602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.484 [2024-07-15 22:04:43.627286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.484 [2024-07-15 22:04:43.636609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.484 [2024-07-15 22:04:43.637061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.484 [2024-07-15 22:04:43.637103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.484 [2024-07-15 22:04:43.637125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.484 [2024-07-15 22:04:43.637652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.484 [2024-07-15 22:04:43.637826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.484 [2024-07-15 22:04:43.637835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.484 [2024-07-15 22:04:43.637841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.484 [2024-07-15 22:04:43.640488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.484 [2024-07-15 22:04:43.649455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.484 [2024-07-15 22:04:43.649927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.484 [2024-07-15 22:04:43.649970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.484 [2024-07-15 22:04:43.649992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.484 [2024-07-15 22:04:43.650584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.484 [2024-07-15 22:04:43.651065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.484 [2024-07-15 22:04:43.651075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.484 [2024-07-15 22:04:43.651081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.484 [2024-07-15 22:04:43.653796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.484 [2024-07-15 22:04:43.662258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.484 [2024-07-15 22:04:43.662714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.484 [2024-07-15 22:04:43.662752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.484 [2024-07-15 22:04:43.662775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.484 [2024-07-15 22:04:43.663372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.484 [2024-07-15 22:04:43.663933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.484 [2024-07-15 22:04:43.663942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.484 [2024-07-15 22:04:43.663949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.484 [2024-07-15 22:04:43.666580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.484 [2024-07-15 22:04:43.675198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.484 [2024-07-15 22:04:43.675680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.484 [2024-07-15 22:04:43.675730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.484 [2024-07-15 22:04:43.675752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.484 [2024-07-15 22:04:43.676249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.484 [2024-07-15 22:04:43.676440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.484 [2024-07-15 22:04:43.676450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.484 [2024-07-15 22:04:43.676457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.484 [2024-07-15 22:04:43.679119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.484 [2024-07-15 22:04:43.688092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.484 [2024-07-15 22:04:43.688568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.484 [2024-07-15 22:04:43.688612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.484 [2024-07-15 22:04:43.688635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.484 [2024-07-15 22:04:43.689077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.484 [2024-07-15 22:04:43.689247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.484 [2024-07-15 22:04:43.689274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.484 [2024-07-15 22:04:43.689280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.484 [2024-07-15 22:04:43.691953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.484 [2024-07-15 22:04:43.700935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.484 [2024-07-15 22:04:43.701328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.484 [2024-07-15 22:04:43.701344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.484 [2024-07-15 22:04:43.701352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.484 [2024-07-15 22:04:43.701515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.484 [2024-07-15 22:04:43.701678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.484 [2024-07-15 22:04:43.701687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.484 [2024-07-15 22:04:43.701693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.484 [2024-07-15 22:04:43.704389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.484 [2024-07-15 22:04:43.713814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.484 [2024-07-15 22:04:43.714218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.484 [2024-07-15 22:04:43.714272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.484 [2024-07-15 22:04:43.714294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.484 [2024-07-15 22:04:43.714874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.484 [2024-07-15 22:04:43.715297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.484 [2024-07-15 22:04:43.715307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.484 [2024-07-15 22:04:43.715313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.484 [2024-07-15 22:04:43.719382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.744 [2024-07-15 22:04:43.727419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.744 [2024-07-15 22:04:43.727804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.744 [2024-07-15 22:04:43.727823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.744 [2024-07-15 22:04:43.727830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.744 [2024-07-15 22:04:43.728007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.744 [2024-07-15 22:04:43.728181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.744 [2024-07-15 22:04:43.728191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.744 [2024-07-15 22:04:43.728197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.744 [2024-07-15 22:04:43.730923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.744 [2024-07-15 22:04:43.740301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:49.744 [2024-07-15 22:04:43.740674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.744 [2024-07-15 22:04:43.740691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:49.744 [2024-07-15 22:04:43.740698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:49.744 [2024-07-15 22:04:43.740861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:49.744 [2024-07-15 22:04:43.741023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:49.744 [2024-07-15 22:04:43.741032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:49.744 [2024-07-15 22:04:43.741038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:49.744 [2024-07-15 22:04:43.743729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:49.744 [2024-07-15 22:04:43.753252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.744 [2024-07-15 22:04:43.753700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.744 [2024-07-15 22:04:43.753743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.744 [2024-07-15 22:04:43.753765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.744 [2024-07-15 22:04:43.754257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.744 [2024-07-15 22:04:43.754423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.744 [2024-07-15 22:04:43.754431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.744 [2024-07-15 22:04:43.754438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.744 [2024-07-15 22:04:43.757037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.744 [2024-07-15 22:04:43.766090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.744 [2024-07-15 22:04:43.766574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.744 [2024-07-15 22:04:43.766592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.744 [2024-07-15 22:04:43.766600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.744 [2024-07-15 22:04:43.766771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.744 [2024-07-15 22:04:43.766943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.744 [2024-07-15 22:04:43.766952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.744 [2024-07-15 22:04:43.766959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.744 [2024-07-15 22:04:43.769695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.744 [2024-07-15 22:04:43.779032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.744 [2024-07-15 22:04:43.779439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.744 [2024-07-15 22:04:43.779457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.744 [2024-07-15 22:04:43.779464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.744 [2024-07-15 22:04:43.779639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.744 [2024-07-15 22:04:43.779803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.744 [2024-07-15 22:04:43.779813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.744 [2024-07-15 22:04:43.779819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.744 [2024-07-15 22:04:43.782445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.744 [2024-07-15 22:04:43.792057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.744 [2024-07-15 22:04:43.792455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.744 [2024-07-15 22:04:43.792471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.744 [2024-07-15 22:04:43.792478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.744 [2024-07-15 22:04:43.792641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.744 [2024-07-15 22:04:43.792804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.744 [2024-07-15 22:04:43.792813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.744 [2024-07-15 22:04:43.792819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.744 [2024-07-15 22:04:43.795504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.744 [2024-07-15 22:04:43.804970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.744 [2024-07-15 22:04:43.805291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.744 [2024-07-15 22:04:43.805308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.744 [2024-07-15 22:04:43.805318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.744 [2024-07-15 22:04:43.805482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.744 [2024-07-15 22:04:43.805645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.744 [2024-07-15 22:04:43.805654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.744 [2024-07-15 22:04:43.805660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.744 [2024-07-15 22:04:43.808406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.744 [2024-07-15 22:04:43.817965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.744 [2024-07-15 22:04:43.818462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.744 [2024-07-15 22:04:43.818508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.744 [2024-07-15 22:04:43.818531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.744 [2024-07-15 22:04:43.819112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.744 [2024-07-15 22:04:43.819552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.744 [2024-07-15 22:04:43.819574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.744 [2024-07-15 22:04:43.819580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.744 [2024-07-15 22:04:43.822445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.745 [2024-07-15 22:04:43.830952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.831439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.831484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.831507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.832087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.832324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.832333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.832340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.834950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.745 [2024-07-15 22:04:43.843911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.844397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.844441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.844463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.845042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.845617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.845630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.845637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.848341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.745 [2024-07-15 22:04:43.856802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.857176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.857192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.857199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.857389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.857563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.857572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.857578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.860272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.745 [2024-07-15 22:04:43.869633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.870090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.870132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.870155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.870585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.870760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.870769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.870776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.873481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.745 [2024-07-15 22:04:43.882527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.882995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.883039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.883062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.883529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.883704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.883714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.883720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.886417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.745 [2024-07-15 22:04:43.895576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.896065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.896083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.896090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.896273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.896452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.896461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.896468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.899300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.745 [2024-07-15 22:04:43.908651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.909158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.909175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.909183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.909365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.909543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.909552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.909559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.912389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.745 [2024-07-15 22:04:43.921746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.922230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.922248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.922256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.922435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.922774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.922786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.922793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.925635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.745 [2024-07-15 22:04:43.934827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.935293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.935311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.935318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.935501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.935680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.935690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.935696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.938528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.745 [2024-07-15 22:04:43.947917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.948379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.948397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.948405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.948584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.948764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.948774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.745 [2024-07-15 22:04:43.948781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.745 [2024-07-15 22:04:43.951629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:49.745 [2024-07-15 22:04:43.960994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.745 [2024-07-15 22:04:43.961366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.745 [2024-07-15 22:04:43.961383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.745 [2024-07-15 22:04:43.961391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.745 [2024-07-15 22:04:43.961569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.745 [2024-07-15 22:04:43.961748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.745 [2024-07-15 22:04:43.961757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.746 [2024-07-15 22:04:43.961764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.746 [2024-07-15 22:04:43.964598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.746 [2024-07-15 22:04:43.974120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:49.746 [2024-07-15 22:04:43.974594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.746 [2024-07-15 22:04:43.974612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:49.746 [2024-07-15 22:04:43.974619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:49.746 [2024-07-15 22:04:43.974797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:49.746 [2024-07-15 22:04:43.974975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:49.746 [2024-07-15 22:04:43.974985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:49.746 [2024-07-15 22:04:43.974996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:49.746 [2024-07-15 22:04:43.977842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.005 [2024-07-15 22:04:43.987341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:43.987824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:43.987842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.005 [2024-07-15 22:04:43.987849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.005 [2024-07-15 22:04:43.988028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.005 [2024-07-15 22:04:43.988207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.005 [2024-07-15 22:04:43.988217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.005 [2024-07-15 22:04:43.988223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.005 [2024-07-15 22:04:43.991078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.005 [2024-07-15 22:04:44.000454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:44.000865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:44.000883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.005 [2024-07-15 22:04:44.000890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.005 [2024-07-15 22:04:44.001068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.005 [2024-07-15 22:04:44.001251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.005 [2024-07-15 22:04:44.001261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.005 [2024-07-15 22:04:44.001268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.005 [2024-07-15 22:04:44.004100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.005 [2024-07-15 22:04:44.013637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:44.014056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:44.014074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.005 [2024-07-15 22:04:44.014081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.005 [2024-07-15 22:04:44.014264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.005 [2024-07-15 22:04:44.014442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.005 [2024-07-15 22:04:44.014452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.005 [2024-07-15 22:04:44.014458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.005 [2024-07-15 22:04:44.017287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.005 [2024-07-15 22:04:44.026821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:44.027302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:44.027324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.005 [2024-07-15 22:04:44.027332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.005 [2024-07-15 22:04:44.027510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.005 [2024-07-15 22:04:44.027690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.005 [2024-07-15 22:04:44.027700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.005 [2024-07-15 22:04:44.027706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.005 [2024-07-15 22:04:44.030543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.005 [2024-07-15 22:04:44.040020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:44.040427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:44.040445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.005 [2024-07-15 22:04:44.040453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.005 [2024-07-15 22:04:44.040631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.005 [2024-07-15 22:04:44.040809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.005 [2024-07-15 22:04:44.040819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.005 [2024-07-15 22:04:44.040825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.005 [2024-07-15 22:04:44.043657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.005 [2024-07-15 22:04:44.053185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:44.053648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:44.053665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.005 [2024-07-15 22:04:44.053673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.005 [2024-07-15 22:04:44.053851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.005 [2024-07-15 22:04:44.054030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.005 [2024-07-15 22:04:44.054039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.005 [2024-07-15 22:04:44.054046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.005 [2024-07-15 22:04:44.056884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.005 [2024-07-15 22:04:44.066255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:44.066730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:44.066747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.005 [2024-07-15 22:04:44.066755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.005 [2024-07-15 22:04:44.066933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.005 [2024-07-15 22:04:44.067113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.005 [2024-07-15 22:04:44.067123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.005 [2024-07-15 22:04:44.067130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.005 [2024-07-15 22:04:44.069967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.005 [2024-07-15 22:04:44.079435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:44.079858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:44.079875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.005 [2024-07-15 22:04:44.079884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.005 [2024-07-15 22:04:44.080061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.005 [2024-07-15 22:04:44.080245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.005 [2024-07-15 22:04:44.080256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.005 [2024-07-15 22:04:44.080262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.005 [2024-07-15 22:04:44.083092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.005 [2024-07-15 22:04:44.092628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:44.093109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:44.093127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.005 [2024-07-15 22:04:44.093134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.005 [2024-07-15 22:04:44.093316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.005 [2024-07-15 22:04:44.093494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.005 [2024-07-15 22:04:44.093504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.005 [2024-07-15 22:04:44.093510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.005 [2024-07-15 22:04:44.096341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.005 [2024-07-15 22:04:44.105692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.005 [2024-07-15 22:04:44.106150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.005 [2024-07-15 22:04:44.106167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.106175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.106357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.106535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.106544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.106551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.109387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.006 [2024-07-15 22:04:44.118747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.119148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.119166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.119173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.119356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.119534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.119543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.119550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.122389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.006 [2024-07-15 22:04:44.131912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.132389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.132406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.132414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.132593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.132771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.132781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.132788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.135623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.006 [2024-07-15 22:04:44.144986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.145404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.145422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.145429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.145606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.145784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.145793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.145800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.148666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.006 [2024-07-15 22:04:44.158034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.158512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.158529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.158540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.158719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.158898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.158908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.158915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.161749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.006 [2024-07-15 22:04:44.171101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.171565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.171582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.171590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.171767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.171946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.171955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.171962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.174797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.006 [2024-07-15 22:04:44.184164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.184578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.184595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.184602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.184780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.184958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.184967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.184974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.187809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.006 [2024-07-15 22:04:44.197218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.197680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.197698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.197705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.197883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.198060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.198073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.198079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.200911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.006 [2024-07-15 22:04:44.210270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.210616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.210633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.210640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.210818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.210996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.211006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.211012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.213851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.006 [2024-07-15 22:04:44.223385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.223808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.223825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.223834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.224011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.224188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.224197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.224204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.227037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.006 [2024-07-15 22:04:44.236565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.006 [2024-07-15 22:04:44.236952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.006 [2024-07-15 22:04:44.236969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.006 [2024-07-15 22:04:44.236977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.006 [2024-07-15 22:04:44.237154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.006 [2024-07-15 22:04:44.237336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.006 [2024-07-15 22:04:44.237347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.006 [2024-07-15 22:04:44.237353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.006 [2024-07-15 22:04:44.240178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.267 [2024-07-15 22:04:44.249786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.267 [2024-07-15 22:04:44.250264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.267 [2024-07-15 22:04:44.250284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.267 [2024-07-15 22:04:44.250292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.267 [2024-07-15 22:04:44.250474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.267 [2024-07-15 22:04:44.250653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.267 [2024-07-15 22:04:44.250663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.267 [2024-07-15 22:04:44.250670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.267 [2024-07-15 22:04:44.253511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.267 [2024-07-15 22:04:44.262958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.267 [2024-07-15 22:04:44.263420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.267 [2024-07-15 22:04:44.263438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.267 [2024-07-15 22:04:44.263446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.267 [2024-07-15 22:04:44.263624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.267 [2024-07-15 22:04:44.263802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.267 [2024-07-15 22:04:44.263812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.267 [2024-07-15 22:04:44.263819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.267 [2024-07-15 22:04:44.266658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.267 [2024-07-15 22:04:44.276042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.276524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.276542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.267 [2024-07-15 22:04:44.276549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.267 [2024-07-15 22:04:44.276726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.267 [2024-07-15 22:04:44.276902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.267 [2024-07-15 22:04:44.276912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.267 [2024-07-15 22:04:44.276919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.267 [2024-07-15 22:04:44.279753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.267 [2024-07-15 22:04:44.289119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.289618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.289661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.267 [2024-07-15 22:04:44.289684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.267 [2024-07-15 22:04:44.290249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.267 [2024-07-15 22:04:44.290430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.267 [2024-07-15 22:04:44.290440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.267 [2024-07-15 22:04:44.290447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.267 [2024-07-15 22:04:44.293282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.267 [2024-07-15 22:04:44.302236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.302647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.302691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.267 [2024-07-15 22:04:44.302714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.267 [2024-07-15 22:04:44.303319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.267 [2024-07-15 22:04:44.303717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.267 [2024-07-15 22:04:44.303727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.267 [2024-07-15 22:04:44.303734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.267 [2024-07-15 22:04:44.306511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.267 [2024-07-15 22:04:44.315165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.315638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.315681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.267 [2024-07-15 22:04:44.315703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.267 [2024-07-15 22:04:44.316296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.267 [2024-07-15 22:04:44.316497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.267 [2024-07-15 22:04:44.316506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.267 [2024-07-15 22:04:44.316513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.267 [2024-07-15 22:04:44.319174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.267 [2024-07-15 22:04:44.328103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.328556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.328573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.267 [2024-07-15 22:04:44.328580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.267 [2024-07-15 22:04:44.328742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.267 [2024-07-15 22:04:44.328906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.267 [2024-07-15 22:04:44.328914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.267 [2024-07-15 22:04:44.328923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.267 [2024-07-15 22:04:44.331756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.267 [2024-07-15 22:04:44.341120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.341623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.341667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.267 [2024-07-15 22:04:44.341690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.267 [2024-07-15 22:04:44.342282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.267 [2024-07-15 22:04:44.342736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.267 [2024-07-15 22:04:44.342745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.267 [2024-07-15 22:04:44.342751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.267 [2024-07-15 22:04:44.345347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.267 [2024-07-15 22:04:44.353965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.354426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.354443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.267 [2024-07-15 22:04:44.354450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.267 [2024-07-15 22:04:44.354613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.267 [2024-07-15 22:04:44.354777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.267 [2024-07-15 22:04:44.354786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.267 [2024-07-15 22:04:44.354792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.267 [2024-07-15 22:04:44.357391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.267 [2024-07-15 22:04:44.366886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.367327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.367343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.267 [2024-07-15 22:04:44.367351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.267 [2024-07-15 22:04:44.367514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.267 [2024-07-15 22:04:44.367677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.267 [2024-07-15 22:04:44.367686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.267 [2024-07-15 22:04:44.367693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.267 [2024-07-15 22:04:44.370388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.267 [2024-07-15 22:04:44.379849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.380315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.380336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.267 [2024-07-15 22:04:44.380343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.267 [2024-07-15 22:04:44.380507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.267 [2024-07-15 22:04:44.380670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.267 [2024-07-15 22:04:44.380679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.267 [2024-07-15 22:04:44.380685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.267 [2024-07-15 22:04:44.383379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.267 [2024-07-15 22:04:44.392815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.267 [2024-07-15 22:04:44.393265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.267 [2024-07-15 22:04:44.393282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.268 [2024-07-15 22:04:44.393291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.268 [2024-07-15 22:04:44.393465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.268 [2024-07-15 22:04:44.393637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.268 [2024-07-15 22:04:44.393647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.268 [2024-07-15 22:04:44.393653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.268 [2024-07-15 22:04:44.396305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.268 [2024-07-15 22:04:44.405827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.268 [2024-07-15 22:04:44.406310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.268 [2024-07-15 22:04:44.406354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.268 [2024-07-15 22:04:44.406378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.268 [2024-07-15 22:04:44.406797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.268 [2024-07-15 22:04:44.406962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.268 [2024-07-15 22:04:44.406971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.268 [2024-07-15 22:04:44.406978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.268 [2024-07-15 22:04:44.409608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.268 [2024-07-15 22:04:44.418796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.268 [2024-07-15 22:04:44.419266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.268 [2024-07-15 22:04:44.419309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.268 [2024-07-15 22:04:44.419332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.268 [2024-07-15 22:04:44.419603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.268 [2024-07-15 22:04:44.419770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.268 [2024-07-15 22:04:44.419779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.268 [2024-07-15 22:04:44.419786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.268 [2024-07-15 22:04:44.422482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.268 [2024-07-15 22:04:44.431768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.268 [2024-07-15 22:04:44.432216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.268 [2024-07-15 22:04:44.432238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.268 [2024-07-15 22:04:44.432246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.268 [2024-07-15 22:04:44.432418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.268 [2024-07-15 22:04:44.432595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.268 [2024-07-15 22:04:44.432604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.268 [2024-07-15 22:04:44.432611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.268 [2024-07-15 22:04:44.435205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.268 [2024-07-15 22:04:44.444664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.268 [2024-07-15 22:04:44.445140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.268 [2024-07-15 22:04:44.445183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.268 [2024-07-15 22:04:44.445205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.268 [2024-07-15 22:04:44.445704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.268 [2024-07-15 22:04:44.445870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.268 [2024-07-15 22:04:44.445879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.268 [2024-07-15 22:04:44.445886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.268 [2024-07-15 22:04:44.448509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.268 [2024-07-15 22:04:44.457577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.268 [2024-07-15 22:04:44.458041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.268 [2024-07-15 22:04:44.458058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.268 [2024-07-15 22:04:44.458065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.268 [2024-07-15 22:04:44.458243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.268 [2024-07-15 22:04:44.458416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.268 [2024-07-15 22:04:44.458425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.268 [2024-07-15 22:04:44.458433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.268 [2024-07-15 22:04:44.461088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.268 [2024-07-15 22:04:44.470370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.268 [2024-07-15 22:04:44.470834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.268 [2024-07-15 22:04:44.470850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.268 [2024-07-15 22:04:44.470857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.268 [2024-07-15 22:04:44.471020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.268 [2024-07-15 22:04:44.471183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.268 [2024-07-15 22:04:44.471192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.268 [2024-07-15 22:04:44.471199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.268 [2024-07-15 22:04:44.473977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.268 [2024-07-15 22:04:44.483270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.268 [2024-07-15 22:04:44.483735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.268 [2024-07-15 22:04:44.483751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.268 [2024-07-15 22:04:44.483759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.268 [2024-07-15 22:04:44.483922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.268 [2024-07-15 22:04:44.484085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.268 [2024-07-15 22:04:44.484094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.268 [2024-07-15 22:04:44.484100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.268 [2024-07-15 22:04:44.486792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.268 [2024-07-15 22:04:44.496164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.268 [2024-07-15 22:04:44.496640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.268 [2024-07-15 22:04:44.496682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.268 [2024-07-15 22:04:44.496704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.268 [2024-07-15 22:04:44.497056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.268 [2024-07-15 22:04:44.497221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.268 [2024-07-15 22:04:44.497235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.268 [2024-07-15 22:04:44.497242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.268 [2024-07-15 22:04:44.499931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
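Every reset cycle above fails for the same root cause: connect() to 10.0.0.2 port 4420 returns errno 111, which on Linux is ECONNREFUSED (no listener on the NVMe/TCP port, as expected while the test holds the subsystem's listener down). A minimal standalone reproduction of that errno, independent of SPDK, is sketched below; it targets 127.0.0.1 instead of the test's 10.0.0.2 on the assumption that nothing is listening locally on port 4420.

```c
/* Minimal sketch of the "connect() failed, errno = 111" entries above:
 * a TCP connect to a port with no listener fails with ECONNREFUSED,
 * which is 111 on Linux. Build: cc econnrefused.c -o econnrefused */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		return 1;
	}

	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),   /* NVMe/TCP port from the log */
	};
	/* Assumption: no local listener on 4420, so the connect is refused. */
	inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		/* Expected output: connect() failed, errno = 111 (Connection refused) */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}
	close(fd);
	return 0;
}
```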
00:26:50.528 [2024-07-15 22:04:44.509152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.528 [2024-07-15 22:04:44.509611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.528 [2024-07-15 22:04:44.509649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.528 [2024-07-15 22:04:44.509681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.528 [2024-07-15 22:04:44.510209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.528 [2024-07-15 22:04:44.510403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.528 [2024-07-15 22:04:44.510413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.528 [2024-07-15 22:04:44.510420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.528 [2024-07-15 22:04:44.513168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.528 [2024-07-15 22:04:44.521987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.528 [2024-07-15 22:04:44.522451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.528 [2024-07-15 22:04:44.522468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.528 [2024-07-15 22:04:44.522475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.528 [2024-07-15 22:04:44.522639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.528 [2024-07-15 22:04:44.522802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.528 [2024-07-15 22:04:44.522811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.528 [2024-07-15 22:04:44.522817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.528 [2024-07-15 22:04:44.525507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.528 [2024-07-15 22:04:44.534813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.528 [2024-07-15 22:04:44.535283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.528 [2024-07-15 22:04:44.535327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.528 [2024-07-15 22:04:44.535350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.528 [2024-07-15 22:04:44.535931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.528 [2024-07-15 22:04:44.536185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.528 [2024-07-15 22:04:44.536194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.528 [2024-07-15 22:04:44.536201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.528 [2024-07-15 22:04:44.538892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.528 [2024-07-15 22:04:44.547703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.528 [2024-07-15 22:04:44.548170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.528 [2024-07-15 22:04:44.548210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.528 [2024-07-15 22:04:44.548246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.528 [2024-07-15 22:04:44.548826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.528 [2024-07-15 22:04:44.549028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.528 [2024-07-15 22:04:44.549044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.528 [2024-07-15 22:04:44.549052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.528 [2024-07-15 22:04:44.551678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.528 [2024-07-15 22:04:44.560560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.528 [2024-07-15 22:04:44.560916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.528 [2024-07-15 22:04:44.560959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.528 [2024-07-15 22:04:44.560982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.528 [2024-07-15 22:04:44.561577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.528 [2024-07-15 22:04:44.562161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.528 [2024-07-15 22:04:44.562185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.528 [2024-07-15 22:04:44.562216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.528 [2024-07-15 22:04:44.564889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.528 [2024-07-15 22:04:44.573399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.528 [2024-07-15 22:04:44.573873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.528 [2024-07-15 22:04:44.573915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.528 [2024-07-15 22:04:44.573937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.528 [2024-07-15 22:04:44.574332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.528 [2024-07-15 22:04:44.574507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.528 [2024-07-15 22:04:44.574516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.528 [2024-07-15 22:04:44.574522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.577185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.529 [2024-07-15 22:04:44.586304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.586794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.586811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.586818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.586980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.587143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.587151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.587157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.590107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.529 [2024-07-15 22:04:44.599321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.599741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.599759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.599766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.599939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.600113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.600123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.600130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.602811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.529 [2024-07-15 22:04:44.612112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.612554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.612571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.612578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.612740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.612903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.612912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.612919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.615612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.529 [2024-07-15 22:04:44.625044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.625519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.625562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.625586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.626152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.626344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.626354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.626360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.629047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.529 [2024-07-15 22:04:44.637920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.638402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.638448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.638472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.639061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.639263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.639272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.639279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.642020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.529 [2024-07-15 22:04:44.650939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.651404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.651449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.651471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.652051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.652364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.652386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.652393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.655079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.529 [2024-07-15 22:04:44.663884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.664345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.664362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.664369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.664532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.664695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.664703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.664710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.667382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.529 [2024-07-15 22:04:44.676885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.677325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.677342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.677349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.677512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.677675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.677683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.677694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.680438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.529 [2024-07-15 22:04:44.689719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.690240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.690283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.690305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.690884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.691476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.691502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.691525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.694177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.529 [2024-07-15 22:04:44.702641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.703105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.703122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.703130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.703300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.703465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.529 [2024-07-15 22:04:44.703474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.529 [2024-07-15 22:04:44.703481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.529 [2024-07-15 22:04:44.706198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.529 [2024-07-15 22:04:44.715479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.529 [2024-07-15 22:04:44.715919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.529 [2024-07-15 22:04:44.715936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.529 [2024-07-15 22:04:44.715944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.529 [2024-07-15 22:04:44.716106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.529 [2024-07-15 22:04:44.716293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.530 [2024-07-15 22:04:44.716303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.530 [2024-07-15 22:04:44.716310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.530 [2024-07-15 22:04:44.719003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.530 [2024-07-15 22:04:44.728287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.530 [2024-07-15 22:04:44.728755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.530 [2024-07-15 22:04:44.728805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.530 [2024-07-15 22:04:44.728827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.530 [2024-07-15 22:04:44.729375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.530 [2024-07-15 22:04:44.729540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.530 [2024-07-15 22:04:44.729549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.530 [2024-07-15 22:04:44.729555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.530 [2024-07-15 22:04:44.732150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.530 [2024-07-15 22:04:44.741216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.530 [2024-07-15 22:04:44.741637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.530 [2024-07-15 22:04:44.741680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.530 [2024-07-15 22:04:44.741702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.530 [2024-07-15 22:04:44.742293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.530 [2024-07-15 22:04:44.742756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.530 [2024-07-15 22:04:44.742767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.530 [2024-07-15 22:04:44.742773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.530 [2024-07-15 22:04:44.745415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.530 [2024-07-15 22:04:44.754139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.530 [2024-07-15 22:04:44.754573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.530 [2024-07-15 22:04:44.754590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.530 [2024-07-15 22:04:44.754596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.530 [2024-07-15 22:04:44.754759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.530 [2024-07-15 22:04:44.754922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.530 [2024-07-15 22:04:44.754932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.530 [2024-07-15 22:04:44.754938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.530 [2024-07-15 22:04:44.757629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.530 [2024-07-15 22:04:44.767180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.530 [2024-07-15 22:04:44.767541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.530 [2024-07-15 22:04:44.767583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.530 [2024-07-15 22:04:44.767606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.530 [2024-07-15 22:04:44.768185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.530 [2024-07-15 22:04:44.768787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.530 [2024-07-15 22:04:44.768798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.530 [2024-07-15 22:04:44.768805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.789 [2024-07-15 22:04:44.771569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.789 [2024-07-15 22:04:44.780086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.790 [2024-07-15 22:04:44.780561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.790 [2024-07-15 22:04:44.780578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.790 [2024-07-15 22:04:44.780585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.790 [2024-07-15 22:04:44.780749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.790 [2024-07-15 22:04:44.780913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.790 [2024-07-15 22:04:44.780922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.790 [2024-07-15 22:04:44.780929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.790 [2024-07-15 22:04:44.783625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.790 [2024-07-15 22:04:44.793090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.790 [2024-07-15 22:04:44.793589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.790 [2024-07-15 22:04:44.793606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.790 [2024-07-15 22:04:44.793613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.790 [2024-07-15 22:04:44.793777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.790 [2024-07-15 22:04:44.793941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.790 [2024-07-15 22:04:44.793950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.790 [2024-07-15 22:04:44.793961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.790 [2024-07-15 22:04:44.796706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.790 [2024-07-15 22:04:44.806072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.790 [2024-07-15 22:04:44.806476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.790 [2024-07-15 22:04:44.806493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.790 [2024-07-15 22:04:44.806501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.790 [2024-07-15 22:04:44.806665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.790 [2024-07-15 22:04:44.806828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.790 [2024-07-15 22:04:44.806838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.790 [2024-07-15 22:04:44.806844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.790 [2024-07-15 22:04:44.809603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.790 [2024-07-15 22:04:44.819098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.790 [2024-07-15 22:04:44.819508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.790 [2024-07-15 22:04:44.819551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.790 [2024-07-15 22:04:44.819573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.790 [2024-07-15 22:04:44.820002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.790 [2024-07-15 22:04:44.820176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.790 [2024-07-15 22:04:44.820185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.790 [2024-07-15 22:04:44.820192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.790 [2024-07-15 22:04:44.822824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.790 [2024-07-15 22:04:44.831975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.790 [2024-07-15 22:04:44.832470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.790 [2024-07-15 22:04:44.832517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.790 [2024-07-15 22:04:44.832541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.790 [2024-07-15 22:04:44.832801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.790 [2024-07-15 22:04:44.832967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.790 [2024-07-15 22:04:44.832977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.790 [2024-07-15 22:04:44.832983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.790 [2024-07-15 22:04:44.835619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.790 [2024-07-15 22:04:44.844920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.790 [2024-07-15 22:04:44.845387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.790 [2024-07-15 22:04:44.845404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.790 [2024-07-15 22:04:44.845411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.790 [2024-07-15 22:04:44.845583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.790 [2024-07-15 22:04:44.845755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.790 [2024-07-15 22:04:44.845764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.790 [2024-07-15 22:04:44.845770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.790 [2024-07-15 22:04:44.848585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.790 [2024-07-15 22:04:44.857953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.790 [2024-07-15 22:04:44.858442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.790 [2024-07-15 22:04:44.858459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.790 [2024-07-15 22:04:44.858470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.790 [2024-07-15 22:04:44.858634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.790 [2024-07-15 22:04:44.858797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.790 [2024-07-15 22:04:44.858806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.790 [2024-07-15 22:04:44.858813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.790 [2024-07-15 22:04:44.861438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.790 [2024-07-15 22:04:44.870800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.790 [2024-07-15 22:04:44.871263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.790 [2024-07-15 22:04:44.871280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.790 [2024-07-15 22:04:44.871287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.790 [2024-07-15 22:04:44.871449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.790 [2024-07-15 22:04:44.871612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.790 [2024-07-15 22:04:44.871621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.790 [2024-07-15 22:04:44.871627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.790 [2024-07-15 22:04:44.874301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:50.790 [2024-07-15 22:04:44.883672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.790 [2024-07-15 22:04:44.884123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.790 [2024-07-15 22:04:44.884166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:50.790 [2024-07-15 22:04:44.884189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:50.790 [2024-07-15 22:04:44.884762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:50.790 [2024-07-15 22:04:44.884938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:50.790 [2024-07-15 22:04:44.884948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:50.790 [2024-07-15 22:04:44.884955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.790 [2024-07-15 22:04:44.887591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.790 [2024-07-15 22:04:44.896578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.790 [2024-07-15 22:04:44.897045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.790 [2024-07-15 22:04:44.897087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.790 [2024-07-15 22:04:44.897108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:44.897703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:44.898182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:44.898195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.791 [2024-07-15 22:04:44.898202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.791 [2024-07-15 22:04:44.900826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.791 [2024-07-15 22:04:44.909373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.791 [2024-07-15 22:04:44.909826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.791 [2024-07-15 22:04:44.909843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.791 [2024-07-15 22:04:44.909850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:44.910012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:44.910175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:44.910184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.791 [2024-07-15 22:04:44.910190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.791 [2024-07-15 22:04:44.912886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.791 [2024-07-15 22:04:44.922258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.791 [2024-07-15 22:04:44.922720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.791 [2024-07-15 22:04:44.922737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.791 [2024-07-15 22:04:44.922744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:44.923065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:44.923239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:44.923250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.791 [2024-07-15 22:04:44.923273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.791 [2024-07-15 22:04:44.925940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.791 [2024-07-15 22:04:44.935059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.791 [2024-07-15 22:04:44.935526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.791 [2024-07-15 22:04:44.935544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.791 [2024-07-15 22:04:44.935551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:44.935714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:44.935878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:44.935887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.791 [2024-07-15 22:04:44.935893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.791 [2024-07-15 22:04:44.938587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.791 [2024-07-15 22:04:44.947964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.791 [2024-07-15 22:04:44.948432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.791 [2024-07-15 22:04:44.948448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.791 [2024-07-15 22:04:44.948455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:44.948620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:44.948783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:44.948792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.791 [2024-07-15 22:04:44.948799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.791 [2024-07-15 22:04:44.951494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.791 [2024-07-15 22:04:44.960846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.791 [2024-07-15 22:04:44.961331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.791 [2024-07-15 22:04:44.961374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.791 [2024-07-15 22:04:44.961396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:44.961976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:44.962191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:44.962200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.791 [2024-07-15 22:04:44.962206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.791 [2024-07-15 22:04:44.964901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.791 [2024-07-15 22:04:44.973780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.791 [2024-07-15 22:04:44.974245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.791 [2024-07-15 22:04:44.974289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.791 [2024-07-15 22:04:44.974311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:44.974891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:44.975415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:44.975426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.791 [2024-07-15 22:04:44.975433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.791 [2024-07-15 22:04:44.978157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.791 [2024-07-15 22:04:44.986666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.791 [2024-07-15 22:04:44.987116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.791 [2024-07-15 22:04:44.987159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.791 [2024-07-15 22:04:44.987181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:44.987740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:44.987996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:44.988009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.791 [2024-07-15 22:04:44.988018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.791 [2024-07-15 22:04:44.992076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.791 [2024-07-15 22:04:44.999995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.791 [2024-07-15 22:04:45.000394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.791 [2024-07-15 22:04:45.000411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.791 [2024-07-15 22:04:45.000418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:45.000586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:45.000754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:45.000764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.791 [2024-07-15 22:04:45.000770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.791 [2024-07-15 22:04:45.003507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.791 [2024-07-15 22:04:45.012802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.791 [2024-07-15 22:04:45.013278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.791 [2024-07-15 22:04:45.013322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.791 [2024-07-15 22:04:45.013344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.791 [2024-07-15 22:04:45.013924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.791 [2024-07-15 22:04:45.014195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.791 [2024-07-15 22:04:45.014204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.792 [2024-07-15 22:04:45.014211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.792 [2024-07-15 22:04:45.016956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:50.792 [2024-07-15 22:04:45.025698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.792 [2024-07-15 22:04:45.026154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.792 [2024-07-15 22:04:45.026198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:50.792 [2024-07-15 22:04:45.026221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:50.792 [2024-07-15 22:04:45.026823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:50.792 [2024-07-15 22:04:45.027313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:50.792 [2024-07-15 22:04:45.027324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:50.792 [2024-07-15 22:04:45.027335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.051 [2024-07-15 22:04:45.030120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.051 [2024-07-15 22:04:45.038695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.051 [2024-07-15 22:04:45.039147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.051 [2024-07-15 22:04:45.039190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.051 [2024-07-15 22:04:45.039212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.051 [2024-07-15 22:04:45.039809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.051 [2024-07-15 22:04:45.040397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.051 [2024-07-15 22:04:45.040407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.051 [2024-07-15 22:04:45.040414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.051 [2024-07-15 22:04:45.043021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.051 [2024-07-15 22:04:45.051531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.051 [2024-07-15 22:04:45.051992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.051 [2024-07-15 22:04:45.052010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.051 [2024-07-15 22:04:45.052017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.051 [2024-07-15 22:04:45.052179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.051 [2024-07-15 22:04:45.052349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.051 [2024-07-15 22:04:45.052358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.051 [2024-07-15 22:04:45.052364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.051 [2024-07-15 22:04:45.055027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.051 [2024-07-15 22:04:45.064467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.051 [2024-07-15 22:04:45.064906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.051 [2024-07-15 22:04:45.064922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.051 [2024-07-15 22:04:45.064929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.051 [2024-07-15 22:04:45.065093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.051 [2024-07-15 22:04:45.065278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.052 [2024-07-15 22:04:45.065288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.052 [2024-07-15 22:04:45.065296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.052 [2024-07-15 22:04:45.067963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.052 [2024-07-15 22:04:45.077408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.052 [2024-07-15 22:04:45.077886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.052 [2024-07-15 22:04:45.077937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.052 [2024-07-15 22:04:45.077960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.052 [2024-07-15 22:04:45.078468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.052 [2024-07-15 22:04:45.078644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.052 [2024-07-15 22:04:45.078653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.052 [2024-07-15 22:04:45.078660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.052 [2024-07-15 22:04:45.082453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.052 [2024-07-15 22:04:45.090830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.052 [2024-07-15 22:04:45.091277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.052 [2024-07-15 22:04:45.091294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.052 [2024-07-15 22:04:45.091302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.052 [2024-07-15 22:04:45.091469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.052 [2024-07-15 22:04:45.091636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.052 [2024-07-15 22:04:45.091645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.052 [2024-07-15 22:04:45.091651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.052 [2024-07-15 22:04:45.094384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.052 [2024-07-15 22:04:45.103713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.052 [2024-07-15 22:04:45.104125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.052 [2024-07-15 22:04:45.104142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.052 [2024-07-15 22:04:45.104149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.052 [2024-07-15 22:04:45.104337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.052 [2024-07-15 22:04:45.104509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.052 [2024-07-15 22:04:45.104520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.052 [2024-07-15 22:04:45.104526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.052 [2024-07-15 22:04:45.107381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.052 [2024-07-15 22:04:45.116637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.052 [2024-07-15 22:04:45.117022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.052 [2024-07-15 22:04:45.117065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.052 [2024-07-15 22:04:45.117087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.052 [2024-07-15 22:04:45.117572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.052 [2024-07-15 22:04:45.117750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.052 [2024-07-15 22:04:45.117760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.052 [2024-07-15 22:04:45.117766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.052 [2024-07-15 22:04:45.120440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.052 [2024-07-15 22:04:45.129577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.052 [2024-07-15 22:04:45.130047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.052 [2024-07-15 22:04:45.130090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.052 [2024-07-15 22:04:45.130112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.052 [2024-07-15 22:04:45.130562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.052 [2024-07-15 22:04:45.130726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.052 [2024-07-15 22:04:45.130736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.052 [2024-07-15 22:04:45.130743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.052 [2024-07-15 22:04:45.133421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.052 [2024-07-15 22:04:45.142455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.052 [2024-07-15 22:04:45.142930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.052 [2024-07-15 22:04:45.142973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.052 [2024-07-15 22:04:45.142995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.052 [2024-07-15 22:04:45.143535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.052 [2024-07-15 22:04:45.143709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.052 [2024-07-15 22:04:45.143719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.052 [2024-07-15 22:04:45.143725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.052 [2024-07-15 22:04:45.146370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.052 [2024-07-15 22:04:45.155379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.052 [2024-07-15 22:04:45.155848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.052 [2024-07-15 22:04:45.155890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.052 [2024-07-15 22:04:45.155911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.052 [2024-07-15 22:04:45.156399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.052 [2024-07-15 22:04:45.156574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.052 [2024-07-15 22:04:45.156583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.052 [2024-07-15 22:04:45.156590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.052 [2024-07-15 22:04:45.159250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.052 [2024-07-15 22:04:45.168216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.052 [2024-07-15 22:04:45.168679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.052 [2024-07-15 22:04:45.168695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.052 [2024-07-15 22:04:45.168702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.052 [2024-07-15 22:04:45.168865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.052 [2024-07-15 22:04:45.169028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.052 [2024-07-15 22:04:45.169037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.052 [2024-07-15 22:04:45.169044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.052 [2024-07-15 22:04:45.171643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.052 [2024-07-15 22:04:45.181045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.052 [2024-07-15 22:04:45.181437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.052 [2024-07-15 22:04:45.181453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.052 [2024-07-15 22:04:45.181459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.052 [2024-07-15 22:04:45.181623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.053 [2024-07-15 22:04:45.181785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.053 [2024-07-15 22:04:45.181794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.053 [2024-07-15 22:04:45.181801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.053 [2024-07-15 22:04:45.184492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.053 [2024-07-15 22:04:45.193915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.053 [2024-07-15 22:04:45.194387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.053 [2024-07-15 22:04:45.194431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.053 [2024-07-15 22:04:45.194453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.053 [2024-07-15 22:04:45.194934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.053 [2024-07-15 22:04:45.195098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.053 [2024-07-15 22:04:45.195107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.053 [2024-07-15 22:04:45.195113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.053 [2024-07-15 22:04:45.197805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.053 [2024-07-15 22:04:45.206874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.053 [2024-07-15 22:04:45.207339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.053 [2024-07-15 22:04:45.207356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.053 [2024-07-15 22:04:45.207369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.053 [2024-07-15 22:04:45.207532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.053 [2024-07-15 22:04:45.207696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.053 [2024-07-15 22:04:45.207704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.053 [2024-07-15 22:04:45.207711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.053 [2024-07-15 22:04:45.210407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.053 [2024-07-15 22:04:45.219918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.053 [2024-07-15 22:04:45.220390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.053 [2024-07-15 22:04:45.220407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.053 [2024-07-15 22:04:45.220413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.053 [2024-07-15 22:04:45.220576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.053 [2024-07-15 22:04:45.220739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.053 [2024-07-15 22:04:45.220748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.053 [2024-07-15 22:04:45.220754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.053 [2024-07-15 22:04:45.223444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.053 [2024-07-15 22:04:45.232831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.053 [2024-07-15 22:04:45.233299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.053 [2024-07-15 22:04:45.233316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.053 [2024-07-15 22:04:45.233323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.053 [2024-07-15 22:04:45.233486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.053 [2024-07-15 22:04:45.233649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.053 [2024-07-15 22:04:45.233658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.053 [2024-07-15 22:04:45.233665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.053 [2024-07-15 22:04:45.236360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.053 [2024-07-15 22:04:45.245636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.053 [2024-07-15 22:04:45.246084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.053 [2024-07-15 22:04:45.246128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.053 [2024-07-15 22:04:45.246150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.053 [2024-07-15 22:04:45.246653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.053 [2024-07-15 22:04:45.246828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.053 [2024-07-15 22:04:45.246840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.053 [2024-07-15 22:04:45.246848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.053 [2024-07-15 22:04:45.249486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.053 [2024-07-15 22:04:45.258555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.053 [2024-07-15 22:04:45.259012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.053 [2024-07-15 22:04:45.259055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.053 [2024-07-15 22:04:45.259077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.053 [2024-07-15 22:04:45.259666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.053 [2024-07-15 22:04:45.259841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.053 [2024-07-15 22:04:45.259849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.053 [2024-07-15 22:04:45.259855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.053 [2024-07-15 22:04:45.262494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.053 [2024-07-15 22:04:45.271463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.053 [2024-07-15 22:04:45.271938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.053 [2024-07-15 22:04:45.271981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.053 [2024-07-15 22:04:45.272003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.053 [2024-07-15 22:04:45.272598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.053 [2024-07-15 22:04:45.273026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.053 [2024-07-15 22:04:45.273035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.053 [2024-07-15 22:04:45.273042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.053 [2024-07-15 22:04:45.275703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.053 [2024-07-15 22:04:45.284518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.053 [2024-07-15 22:04:45.284996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.053 [2024-07-15 22:04:45.285013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.053 [2024-07-15 22:04:45.285020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.053 [2024-07-15 22:04:45.285197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.053 [2024-07-15 22:04:45.285380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.053 [2024-07-15 22:04:45.285390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.053 [2024-07-15 22:04:45.285397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.053 [2024-07-15 22:04:45.288242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.313 [2024-07-15 22:04:45.297672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.313 [2024-07-15 22:04:45.298149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.313 [2024-07-15 22:04:45.298168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.313 [2024-07-15 22:04:45.298176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.313 [2024-07-15 22:04:45.298362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.313 [2024-07-15 22:04:45.298540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.313 [2024-07-15 22:04:45.298551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.313 [2024-07-15 22:04:45.298559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.313 [2024-07-15 22:04:45.301398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.313 [2024-07-15 22:04:45.310762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.313 [2024-07-15 22:04:45.311237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.313 [2024-07-15 22:04:45.311254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.313 [2024-07-15 22:04:45.311262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.313 [2024-07-15 22:04:45.311440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.313 [2024-07-15 22:04:45.311618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.313 [2024-07-15 22:04:45.311628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.313 [2024-07-15 22:04:45.311636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.313 [2024-07-15 22:04:45.314475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.313 [2024-07-15 22:04:45.323841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.313 [2024-07-15 22:04:45.324320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.313 [2024-07-15 22:04:45.324338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.313 [2024-07-15 22:04:45.324345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.313 [2024-07-15 22:04:45.324523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.313 [2024-07-15 22:04:45.324701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.313 [2024-07-15 22:04:45.324709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.313 [2024-07-15 22:04:45.324716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.313 [2024-07-15 22:04:45.327552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.336942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.337335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.337354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.337361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.337543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.337720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.337730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.337737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.340573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.350090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.350556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.350573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.350580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.350759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.350938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.350948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.350956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.353791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.363166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.363624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.363642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.363649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.363826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.364004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.364014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.364021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.366857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.376228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.376635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.376652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.376659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.376837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.377017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.377026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.377036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.379872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.389406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.389881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.389899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.389906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.390083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.390267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.390276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.390283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.393113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.402465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.402865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.402882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.402889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.403068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.403251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.403261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.403268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.406102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.415635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.416109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.416126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.416133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.416317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.416495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.416503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.416510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.419344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.428767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.429232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.429253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.429260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.429438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.429619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.429628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.429636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.432469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.441820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.442275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.442293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.442301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.442478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.442655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.442665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.442671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.445510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.454876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:51.314 [2024-07-15 22:04:45.455290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.314 [2024-07-15 22:04:45.455307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420
00:26:51.314 [2024-07-15 22:04:45.455314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set
00:26:51.314 [2024-07-15 22:04:45.455493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor
00:26:51.314 [2024-07-15 22:04:45.455671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:51.314 [2024-07-15 22:04:45.455681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:51.314 [2024-07-15 22:04:45.455687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:51.314 [2024-07-15 22:04:45.458524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:51.314 [2024-07-15 22:04:45.468046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.314 [2024-07-15 22:04:45.468523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.314 [2024-07-15 22:04:45.468540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.314 [2024-07-15 22:04:45.468548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.314 [2024-07-15 22:04:45.468725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.314 [2024-07-15 22:04:45.468908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.314 [2024-07-15 22:04:45.468918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.314 [2024-07-15 22:04:45.468924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.314 [2024-07-15 22:04:45.471760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.315 [2024-07-15 22:04:45.481167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.315 [2024-07-15 22:04:45.481648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.315 [2024-07-15 22:04:45.481665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.315 [2024-07-15 22:04:45.481673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.315 [2024-07-15 22:04:45.481851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.315 [2024-07-15 22:04:45.482028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.315 [2024-07-15 22:04:45.482038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.315 [2024-07-15 22:04:45.482045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.315 [2024-07-15 22:04:45.484886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.315 [2024-07-15 22:04:45.494265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.315 [2024-07-15 22:04:45.494743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.315 [2024-07-15 22:04:45.494761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.315 [2024-07-15 22:04:45.494768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.315 [2024-07-15 22:04:45.494945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.315 [2024-07-15 22:04:45.495123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.315 [2024-07-15 22:04:45.495133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.315 [2024-07-15 22:04:45.495140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.315 [2024-07-15 22:04:45.497976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.315 [2024-07-15 22:04:45.507355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.315 [2024-07-15 22:04:45.507834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.315 [2024-07-15 22:04:45.507852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.315 [2024-07-15 22:04:45.507859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.315 [2024-07-15 22:04:45.508037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.315 [2024-07-15 22:04:45.508216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.315 [2024-07-15 22:04:45.508231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.315 [2024-07-15 22:04:45.508240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.315 [2024-07-15 22:04:45.511078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.315 [2024-07-15 22:04:45.520466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.315 [2024-07-15 22:04:45.520928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.315 [2024-07-15 22:04:45.520946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.315 [2024-07-15 22:04:45.520953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.315 [2024-07-15 22:04:45.521131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.315 [2024-07-15 22:04:45.521313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.315 [2024-07-15 22:04:45.521323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.315 [2024-07-15 22:04:45.521330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.315 [2024-07-15 22:04:45.524163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.315 [2024-07-15 22:04:45.533603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.315 [2024-07-15 22:04:45.534064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.315 [2024-07-15 22:04:45.534082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.315 [2024-07-15 22:04:45.534089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.315 [2024-07-15 22:04:45.534281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.315 [2024-07-15 22:04:45.534473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.315 [2024-07-15 22:04:45.534483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.315 [2024-07-15 22:04:45.534490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3845824 Killed "${NVMF_APP[@]}" "$@" 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.315 [2024-07-15 22:04:45.537360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3847248 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3847248 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3847248 ']' 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.315 22:04:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:51.315 [2024-07-15 22:04:45.546792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.315 [2024-07-15 22:04:45.547254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.315 [2024-07-15 22:04:45.547272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.315 [2024-07-15 22:04:45.547280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.315 [2024-07-15 22:04:45.547465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.315 [2024-07-15 22:04:45.547645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.315 [2024-07-15 22:04:45.547654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.315 [2024-07-15 22:04:45.547661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.315 [2024-07-15 22:04:45.550515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.575 [2024-07-15 22:04:45.559937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.560421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.560439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.560447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.560626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.560806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.575 [2024-07-15 22:04:45.560816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.575 [2024-07-15 22:04:45.560823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.575 [2024-07-15 22:04:45.563656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.575 [2024-07-15 22:04:45.573025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.573407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.573425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.573433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.573611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.573788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.575 [2024-07-15 22:04:45.573797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.575 [2024-07-15 22:04:45.573804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.575 [2024-07-15 22:04:45.576651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.575 [2024-07-15 22:04:45.586107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.586565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.586582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.586593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.586767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.586940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.575 [2024-07-15 22:04:45.586950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.575 [2024-07-15 22:04:45.586957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.575 [2024-07-15 22:04:45.588762] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:26:51.575 [2024-07-15 22:04:45.588802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.575 [2024-07-15 22:04:45.589804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.575 [2024-07-15 22:04:45.599267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.599604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.599621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.599628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.599801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.599974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.575 [2024-07-15 22:04:45.599983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.575 [2024-07-15 22:04:45.599990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.575 [2024-07-15 22:04:45.602808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.575 [2024-07-15 22:04:45.612419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.612759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.612776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.612784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.612977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.613158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.575 [2024-07-15 22:04:45.613167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.575 [2024-07-15 22:04:45.613174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.575 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.575 [2024-07-15 22:04:45.616131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.575 [2024-07-15 22:04:45.625511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.625856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.625875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.625883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.626065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.626249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.575 [2024-07-15 22:04:45.626259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.575 [2024-07-15 22:04:45.626266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.575 [2024-07-15 22:04:45.629093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
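For reference on the masks showing up around here: the restarted target is launched with -m 0xE (visible as -c 0xE in the DPDK EAL parameters above), a hex bitmask of CPU cores — 0xE = 0b1110 selects cores 1, 2 and 3, which is why the app reports "Total cores available: 3" and later starts reactors on cores 1, 2 and 3. The "EAL: No free 2048 kB hugepages reported on node 1" notice is, as far as I can tell, informational when the hugepages are reserved on the other NUMA node. A tiny sketch decoding the mask:

    /* coremask.c - decode the core mask handed to nvmf_tgt (-m 0xE) and to
     * the DPDK EAL (-c 0xE) in the log above. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int mask = 0xE; /* 0b1110 */
        int total = 0;

        for (int core = 0; core < 32; core++) {
            if (mask & (1u << core)) {
                printf("reactor core %d\n", core);
                total++;
            }
        }
        printf("Total cores available: %d\n", total); /* matches the log: 3 (cores 1, 2, 3) */
        return 0;
    }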
00:26:51.575 [2024-07-15 22:04:45.638545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.639007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.639025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.639032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.639205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.639383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.575 [2024-07-15 22:04:45.639393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.575 [2024-07-15 22:04:45.639399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.575 [2024-07-15 22:04:45.642221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.575 [2024-07-15 22:04:45.648630] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:51.575 [2024-07-15 22:04:45.651640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.652044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.652062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.652069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.652248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.652421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.575 [2024-07-15 22:04:45.652431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.575 [2024-07-15 22:04:45.652438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.575 [2024-07-15 22:04:45.655239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.575 [2024-07-15 22:04:45.664746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.665150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.665168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.665175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.665359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.665546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.575 [2024-07-15 22:04:45.665560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.575 [2024-07-15 22:04:45.665567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.575 [2024-07-15 22:04:45.668335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.575 [2024-07-15 22:04:45.677770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.575 [2024-07-15 22:04:45.678253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.575 [2024-07-15 22:04:45.678271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.575 [2024-07-15 22:04:45.678279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.575 [2024-07-15 22:04:45.678457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.575 [2024-07-15 22:04:45.678636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.678645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.678652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.576 [2024-07-15 22:04:45.681439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.576 [2024-07-15 22:04:45.690833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.691311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.691332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.691340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.576 [2024-07-15 22:04:45.691526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.576 [2024-07-15 22:04:45.691701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.691711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.691718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.576 [2024-07-15 22:04:45.694505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.576 [2024-07-15 22:04:45.703899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.704282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.704301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.704308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.576 [2024-07-15 22:04:45.704497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.576 [2024-07-15 22:04:45.704670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.704680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.704687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.576 [2024-07-15 22:04:45.707475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.576 [2024-07-15 22:04:45.717054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.717550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.717567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.717575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.576 [2024-07-15 22:04:45.717753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.576 [2024-07-15 22:04:45.717931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.717940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.717947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.576 [2024-07-15 22:04:45.720782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.576 [2024-07-15 22:04:45.730145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.730463] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.576 [2024-07-15 22:04:45.730489] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.576 [2024-07-15 22:04:45.730496] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.576 [2024-07-15 22:04:45.730502] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.576 [2024-07-15 22:04:45.730508] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.576 [2024-07-15 22:04:45.730545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.576 [2024-07-15 22:04:45.730644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.730661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.730669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.576 [2024-07-15 22:04:45.730628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.576 [2024-07-15 22:04:45.730630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.576 [2024-07-15 22:04:45.730851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.576 [2024-07-15 22:04:45.731028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.731037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.731044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:51.576 [2024-07-15 22:04:45.733880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.576 [2024-07-15 22:04:45.743244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.743752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.743773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.743781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.576 [2024-07-15 22:04:45.743962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.576 [2024-07-15 22:04:45.744141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.744156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.744164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.576 [2024-07-15 22:04:45.747001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.576 [2024-07-15 22:04:45.756374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.756855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.756876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.756885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.576 [2024-07-15 22:04:45.757064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.576 [2024-07-15 22:04:45.757249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.757259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.757266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.576 [2024-07-15 22:04:45.760091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.576 [2024-07-15 22:04:45.769459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.769941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.769962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.769969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.576 [2024-07-15 22:04:45.770148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.576 [2024-07-15 22:04:45.770333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.770343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.770350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.576 [2024-07-15 22:04:45.773178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.576 [2024-07-15 22:04:45.782550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.783049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.783070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.783079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.576 [2024-07-15 22:04:45.783263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.576 [2024-07-15 22:04:45.783442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.783452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.783460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.576 [2024-07-15 22:04:45.786292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.576 [2024-07-15 22:04:45.795841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.796266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.796284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.796292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.576 [2024-07-15 22:04:45.796472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.576 [2024-07-15 22:04:45.796651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.576 [2024-07-15 22:04:45.796661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.576 [2024-07-15 22:04:45.796669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.576 [2024-07-15 22:04:45.799505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.576 [2024-07-15 22:04:45.809037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.576 [2024-07-15 22:04:45.809432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.576 [2024-07-15 22:04:45.809450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.576 [2024-07-15 22:04:45.809458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.577 [2024-07-15 22:04:45.809636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.577 [2024-07-15 22:04:45.809814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.577 [2024-07-15 22:04:45.809823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.577 [2024-07-15 22:04:45.809830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.577 [2024-07-15 22:04:45.812669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.836 [2024-07-15 22:04:45.822213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.822629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.822647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.822654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.822832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.823010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.823020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.823027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.825863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.836 [2024-07-15 22:04:45.835377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.835662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.835679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.835687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.835868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.836048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.836057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.836064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.838900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.836 [2024-07-15 22:04:45.848426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.848906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.848923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.848931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.849109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.849292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.849302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.849308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.852131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.836 [2024-07-15 22:04:45.861489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.861952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.861970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.861977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.862154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.862337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.862347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.862356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.865182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.836 [2024-07-15 22:04:45.874543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.875021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.875039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.875047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.875238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.875420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.875430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.875443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.878271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.836 [2024-07-15 22:04:45.887631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.888006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.888023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.888031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.888207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.888390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.888399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.888407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.891234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.836 [2024-07-15 22:04:45.900756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.901097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.901114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.901121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.901303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.901482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.901491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.901498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.904335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.836 [2024-07-15 22:04:45.913846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.914325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.914343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.914350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.914534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.914708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.914718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.914724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.917559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.836 [2024-07-15 22:04:45.926944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.927368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.927389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.927397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.927575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.927754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.927763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.927770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.930601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.836 [2024-07-15 22:04:45.940122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.940602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.940620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.940628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.940805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.940984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.940994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.941001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.943834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.836 [2024-07-15 22:04:45.953193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.836 [2024-07-15 22:04:45.953652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.836 [2024-07-15 22:04:45.953670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.836 [2024-07-15 22:04:45.953677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.836 [2024-07-15 22:04:45.953855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.836 [2024-07-15 22:04:45.954034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.836 [2024-07-15 22:04:45.954043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.836 [2024-07-15 22:04:45.954052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.836 [2024-07-15 22:04:45.956884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.837 [2024-07-15 22:04:45.966223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.837 [2024-07-15 22:04:45.966682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.837 [2024-07-15 22:04:45.966699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55ad0 with addr=10.0.0.2, port=4420 00:26:51.837 [2024-07-15 22:04:45.966707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ad0 is same with the state(5) to be set 00:26:51.837 [2024-07-15 22:04:45.966884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55ad0 (9): Bad file descriptor 00:26:51.837 [2024-07-15 22:04:45.967066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.837 [2024-07-15 22:04:45.967076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.837 [2024-07-15 22:04:45.967083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.837 [2024-07-15 22:04:45.969914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.837 [... 22:04:45.979 through 22:04:46.388: the identical reset cycle repeats against tqpair=0xf55ad0 (addr=10.0.0.2, port=4420) roughly every 13 ms: resetting controller, connect() failed with errno = 111, "Ctrlr is in error state", "controller reinitialization failed", "Resetting controller failed."; some 32 near-verbatim repetitions elided ...]
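For scale, one way to measure how long bdevperf sat in this refused-connect loop is to count the repetitions straight from the captured console log (the build.log filename here is an assumption, not something this job produces under that name):

    # Sketch: count refused connects and bracket the loop in time (log name assumed).
    grep -c 'connect() failed, errno = 111' build.log
    grep -o '\[2024-07-15 [^]]*\]' build.log | sed -n '1p;$p'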
00:26:52.358 [2024-07-15 22:04:46.397998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.358 [... reset fails again (connect() errno = 111); repetition elided ...]
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.358 [2024-07-15 22:04:46.411057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.358 [... reset fails again (connect() errno = 111); repetition elided ...]
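The (( i == 0 )) / return 0 pair a few lines up is the tail of the common.sh loop that polls until the nvmf target process answers RPCs, and timing_exit start_nvmf_tgt confirms the target is now considered started even though its TCP listener is only added later. A minimal sketch of that poll pattern (the helper name, retry budget, and socket path are assumptions, not the exact common.sh code):

    # Sketch of the wait-for-target pattern, assuming the default RPC socket.
    wait_for_tgt() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 40; i != 0; i--)); do
            # Process still alive and RPC socket answering?
            kill -0 "$pid" 2>/dev/null &&
                scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1   # timed out
        return 0
    }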
00:26:52.358 [2024-07-15 22:04:46.424193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.358 [... reset fails again (connect() errno = 111); repetition elided ...]
00:26:52.358 [2024-07-15 22:04:46.437377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.358 [... reset fails again (connect() errno = 111); repetition elided ...]
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.358 [2024-07-15 22:04:46.446874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:52.358 [2024-07-15 22:04:46.450476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.358 [... that reset fails as well (connect() errno = 111), completing at 22:04:46.454084; repetition elided ...]
00:26:52.358 [2024-07-15 22:04:46.463613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.358 [... reset fails again (connect() errno = 111); repetition elided ...]
00:26:52.358 [2024-07-15 22:04:46.476685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.358 [... reset fails again (connect() errno = 111); repetition elided ...]
00:26:52.358 Malloc0
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.358 [2024-07-15 22:04:46.489745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.358 [... reset fails again (connect() errno = 111); repetition elided ...]
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:52.358 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.358 [2024-07-15 22:04:46.502816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.358 [... reset fails again (connect() errno = 111); repetition elided ...]
00:26:52.359 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:52.359 22:04:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:52.359 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:52.359 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.359 [2024-07-15 22:04:46.511762] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:52.359 [2024-07-15 22:04:46.515874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.359 22:04:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:52.359 22:04:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3846191
00:26:52.617 [2024-07-15 22:04:46.623989] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
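rpc_cmd in these traces is the common.sh wrapper around SPDK's rpc.py. Assuming the default RPC socket, the target-side bring-up the test just performed is equivalent to the following sequence (a sketch; the commands and arguments are verbatim from the trace, the rpc.py path and socket are assumptions). Note the listener is deliberately added last, which is why every initiator reset before 22:04:46.511 was refused and only the 22:04:46.515 reset succeeds:

    # Sketch: the same target bring-up via rpc.py directly (socket path assumed).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420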
00:27:02.612
00:27:02.612 Latency(us)
00:27:02.612 Device Information : runtime(s)     IOPS   MiB/s    Fail/s   TO/s   Average      min       max
00:27:02.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:02.612 Verification LBA range: start 0x0 length 0x4000
00:27:02.612 Nvme1n1            :      15.01  8048.49   31.44  12776.87   0.00   6126.58   651.80  18578.03
00:27:02.612 ===================================================================================================================
00:27:02.612 Total              :             8048.49   31.44  12776.87   0.00   6126.58   651.80  18578.03
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:02.612 rmmod nvme_tcp
00:27:02.612 rmmod nvme_fabrics
00:27:02.612 rmmod nvme_keyring
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3847248 ']'
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3847248
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3847248 ']'
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3847248
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3847248
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3847248'
00:27:02.612 killing process with pid 3847248
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3847248
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3847248
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
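The killprocess 3847248 trace above shows the guard sequence common.sh runs before killing the target app: kill -0 to confirm the pid is alive, ps --no-headers -o comm= to check it really is an SPDK reactor (reactor_1 here) and not a bare sudo, then kill followed by wait. A standalone sketch of that pattern (a simplification; the real helper has more branches):

    # Sketch of the killprocess guard pattern (assumed simplification of common.sh).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1
        [ "$name" = sudo ] && return 0           # never kill a bare sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                  # only works if $pid is our child
    }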
00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.612 22:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.548 22:04:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:03.548 00:27:03.548 real 0m25.355s 00:27:03.548 user 1m2.534s 00:27:03.548 sys 0m5.578s 00:27:03.548 22:04:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:03.548 22:04:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:03.548 ************************************ 00:27:03.549 END TEST nvmf_bdevperf 00:27:03.549 ************************************ 00:27:03.549 22:04:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:03.549 22:04:57 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:03.549 22:04:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:03.549 22:04:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.549 22:04:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.549 ************************************ 00:27:03.549 START TEST nvmf_target_disconnect 00:27:03.549 ************************************ 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:03.549 * Looking for test storage... 
00:27:03.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.549 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.809 22:04:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:09.081 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.081 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:09.081 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.082 22:05:02 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:09.082 Found net devices under 0000:86:00.0: cvl_0_0 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:09.082 Found net devices under 0000:86:00.1: cvl_0_1 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:09.082 22:05:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:09.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:27:09.082 00:27:09.082 --- 10.0.0.2 ping statistics --- 00:27:09.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.082 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:09.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:27:09.082 00:27:09.082 --- 10.0.0.1 ping statistics --- 00:27:09.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.082 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:09.082 ************************************ 00:27:09.082 START TEST nvmf_target_disconnect_tc1 00:27:09.082 ************************************ 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:09.082 
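Both disconnect tests reuse the topology assembled in the nvmf_tcp_init trace just above: one e810 port (cvl_0_0) is moved into a network namespace to serve as the target side, while its peer (cvl_0_1) stays in the root namespace as the initiator. Condensed into a bash sketch, with no commands beyond those traced:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator side
  ping -c 1 10.0.0.2                                                  # sanity check: root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # sanity check: namespace -> root ns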
22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:09.082 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.082 [2024-07-15 22:05:03.227124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.082 [2024-07-15 22:05:03.227166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf97ed0 with addr=10.0.0.2, port=4420 00:27:09.082 [2024-07-15 22:05:03.227205] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:09.082 [2024-07-15 22:05:03.227214] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:09.082 [2024-07-15 22:05:03.227219] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:09.082 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:09.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:09.082 Initializing NVMe Controllers 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:09.082 00:27:09.082 real 0m0.100s 00:27:09.082 user 0m0.044s 00:27:09.082 sys 0m0.055s 
00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:09.082 ************************************ 00:27:09.082 END TEST nvmf_target_disconnect_tc1 00:27:09.082 ************************************ 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.082 22:05:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:09.082 ************************************ 00:27:09.082 START TEST nvmf_target_disconnect_tc2 00:27:09.082 ************************************ 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3852271 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3852271 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3852271 ']' 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
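Worth spelling out, since every later RPC depends on it: nvmfappstart launches the target inside that namespace and blocks until the RPC socket answers. A sketch of the invocation traced above (waitforlisten is the harness helper that polls /var/tmp/spdk.sock; the pid variable names mirror the trace):

  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!                 # 3852271 in this run; -m 0xF0 pins reactors to cores 4-7, as the startup notices below confirm
  waitforlisten $nvmfpid     # returns once the app accepts RPCs on /var/tmp/spdk.sock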
00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:09.083 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.342 [2024-07-15 22:05:03.342458] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:27:09.342 [2024-07-15 22:05:03.342499] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.342 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.342 [2024-07-15 22:05:03.406030] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:09.342 [2024-07-15 22:05:03.504238] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.342 [2024-07-15 22:05:03.504279] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.342 [2024-07-15 22:05:03.504291] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.342 [2024-07-15 22:05:03.504316] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.342 [2024-07-15 22:05:03.504324] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.342 [2024-07-15 22:05:03.504446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:09.342 [2024-07-15 22:05:03.504875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:09.342 [2024-07-15 22:05:03.504967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:09.342 [2024-07-15 22:05:03.504965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.601 Malloc0 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:09.601 22:05:03 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.601 [2024-07-15 22:05:03.666998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.601 [2024-07-15 22:05:03.699251] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3852297 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:09.601 22:05:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:09.601 EAL: No free 2048 kB 
hugepages reported on node 1 00:27:11.508 22:05:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3852271 00:27:11.508 22:05:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Read completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.508 Write completed with error (sct=0, sc=8) 00:27:11.508 starting I/O failed 00:27:11.509 [2024-07-15 22:05:05.726701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting 
I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 [2024-07-15 22:05:05.726900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 
00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 [2024-07-15 22:05:05.727100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write 
completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Read completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 Write completed with error (sct=0, sc=8) 00:27:11.509 starting I/O failed 00:27:11.509 [2024-07-15 22:05:05.727300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.509 [2024-07-15 22:05:05.727528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.509 [2024-07-15 22:05:05.727544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.509 qpair failed and we were unable to recover it. 00:27:11.509 [2024-07-15 22:05:05.727700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.509 [2024-07-15 22:05:05.727710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.509 qpair failed and we were unable to recover it. 00:27:11.509 [2024-07-15 22:05:05.727963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.509 [2024-07-15 22:05:05.727973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.509 qpair failed and we were unable to recover it. 00:27:11.509 [2024-07-15 22:05:05.728167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.509 [2024-07-15 22:05:05.728177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.509 qpair failed and we were unable to recover it. 00:27:11.509 [2024-07-15 22:05:05.728345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.509 [2024-07-15 22:05:05.728356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.509 qpair failed and we were unable to recover it. 00:27:11.509 [2024-07-15 22:05:05.728458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.509 [2024-07-15 22:05:05.728467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.509 qpair failed and we were unable to recover it. 
00:27:11.509 [2024-07-15 22:05:05.728622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.509 [2024-07-15 22:05:05.728632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.509 qpair failed and we were unable to recover it. 00:27:11.509 [2024-07-15 22:05:05.728752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.510 [2024-07-15 22:05:05.728763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.510 qpair failed and we were unable to recover it. 00:27:11.510 [2024-07-15 22:05:05.728888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.510 [2024-07-15 22:05:05.728897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.510 qpair failed and we were unable to recover it. 00:27:11.510 [2024-07-15 22:05:05.729116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.510 [2024-07-15 22:05:05.729146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.510 qpair failed and we were unable to recover it. 00:27:11.510 [2024-07-15 22:05:05.729340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.510 [2024-07-15 22:05:05.729370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.510 qpair failed and we were unable to recover it. 00:27:11.510 [2024-07-15 22:05:05.729562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.510 [2024-07-15 22:05:05.729593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.510 qpair failed and we were unable to recover it. 00:27:11.510 [2024-07-15 22:05:05.729762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.510 [2024-07-15 22:05:05.729772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.510 qpair failed and we were unable to recover it. 00:27:11.510 [2024-07-15 22:05:05.729884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.510 [2024-07-15 22:05:05.729897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.510 qpair failed and we were unable to recover it. 00:27:11.510 [2024-07-15 22:05:05.730013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.510 [2024-07-15 22:05:05.730022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.510 qpair failed and we were unable to recover it. 00:27:11.510 [2024-07-15 22:05:05.730135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.510 [2024-07-15 22:05:05.730145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.510 qpair failed and we were unable to recover it. 
00:27:11.792 [2024-07-15 22:05:05.766252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.766266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.766531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.766544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.766675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.766689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.766885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.766914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.767087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.767117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.767278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.767308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.767566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.767596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.767815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.767850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.768018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.768032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.768159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.768172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 
00:27:11.792 [2024-07-15 22:05:05.768355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.768369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.768567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.768581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.768701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.768715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.768914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.768930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.769063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.769077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.769280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.769310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.769475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.769505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.769655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.769685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.769842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.769872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.770041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.770071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 
00:27:11.792 [2024-07-15 22:05:05.770243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.770273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.770440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.770470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.770690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.770720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.770876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.770906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.771079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.771108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.771407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.792 [2024-07-15 22:05:05.771421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.792 qpair failed and we were unable to recover it. 00:27:11.792 [2024-07-15 22:05:05.771560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.771573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.771724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.771738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.771864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.771878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.772004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.772018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 
00:27:11.793 [2024-07-15 22:05:05.772152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.772166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.772293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.772307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.772508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.772522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.772638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.772652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.772766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.772780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.772917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.772931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.773131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.773144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.773358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.773372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.773576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.773606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.773823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.773853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 
00:27:11.793 [2024-07-15 22:05:05.774005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.774035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.774203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.774240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.774466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.774496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.774749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.774778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.774945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.774959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.775089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.775103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.775362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.775376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.775511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.775530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.775741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.775771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.775926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.775956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 
00:27:11.793 [2024-07-15 22:05:05.776112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.776142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.776307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.776321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.776433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.776446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.776579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.776592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.776780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.776794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.776986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.777003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.777145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.777158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.777306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.777320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.777407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.777420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.777601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.777635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 
00:27:11.793 [2024-07-15 22:05:05.777804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.777834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.793 [2024-07-15 22:05:05.778003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.793 [2024-07-15 22:05:05.778034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.793 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.778258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.778288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.778512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.778541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.778694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.778723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.779009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.779038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.779257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.779271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.779395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.779408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.779544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.779557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.779686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.779700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 
00:27:11.794 [2024-07-15 22:05:05.779888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.779901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.780092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.780106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.780288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.780302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.780434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.780447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.780603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.780638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.780862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.780888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.781028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.781039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.781166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.781176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.781356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.781367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.781484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.781495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 
00:27:11.794 [2024-07-15 22:05:05.781616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.781626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.781742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.781752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.781862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.781872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.782000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.782009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.782125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.782135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.782263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.794 [2024-07-15 22:05:05.782278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.794 qpair failed and we were unable to recover it. 00:27:11.794 [2024-07-15 22:05:05.782395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.782405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.782520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.782533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.782726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.782736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.782919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.782930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 
00:27:11.795 [2024-07-15 22:05:05.783102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.783113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.783232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.783242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.783367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.783377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.783482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.783492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.783600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.783610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.783786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.783796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.783907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.783917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.784050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.784060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.784175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.784185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.784304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.784315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 
00:27:11.795 [2024-07-15 22:05:05.784485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.784494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.784614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.784623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.784776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.784787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.784909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.784919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.785099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.785108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.785220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.785236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.785412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.785421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.785633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.785643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.785728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.785737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.785841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.785850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 
00:27:11.795 [2024-07-15 22:05:05.785967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.785977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.786113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.786122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.786326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.786336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.786453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.786463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.786657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.786677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.786804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.786818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.787004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.787017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.787156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.787170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.787296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.787310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.787512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.787526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 
00:27:11.795 [2024-07-15 22:05:05.787660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.787674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.787857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.787890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.788179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.788209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.788455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.788486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.795 [2024-07-15 22:05:05.788648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.795 [2024-07-15 22:05:05.788678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.795 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.788836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.788866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.789102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.789132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.789296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.789327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.789674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.789704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.789993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.790023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 
00:27:11.796 [2024-07-15 22:05:05.790308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.790322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.790594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.790608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.790750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.790764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.790960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.790974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.791110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.791123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.791269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.791283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.791409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.791423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.791699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.791729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.791958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.791987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.792261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.792292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 
00:27:11.796 [2024-07-15 22:05:05.792464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.792493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.792643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.792679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.792845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.792858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.792980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.792994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.793213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.793230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.793446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.793459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.793734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.793748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.793897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.793910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.794144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.794174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.794355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.794386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 
00:27:11.796 [2024-07-15 22:05:05.794638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.794668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.794836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.794866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.794994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.795024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.795197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.795235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.795418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.795449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.795686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.795716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.795942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.795972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.796141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.796155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.796 [2024-07-15 22:05:05.796278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.796 [2024-07-15 22:05:05.796293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.796 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.796502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.796516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 
00:27:11.797 [2024-07-15 22:05:05.796726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.796739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.796875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.796918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.797141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.797171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.797341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.797371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.797586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.797616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.797764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.797795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.798028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.798069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.798217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.798234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.798362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.798379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.798563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.798577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 
00:27:11.797 [2024-07-15 22:05:05.798764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.798778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.798978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.799008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.799293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.799323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.799566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.799596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.799826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.799855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.800139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.800170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.800343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.800375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.800595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.800625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.800913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.800943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.801192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.801206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 
00:27:11.797 [2024-07-15 22:05:05.801393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.801407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.801548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.801589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.801745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.801776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.802011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.802040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.802216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.802235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.802426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.802440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.802687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.802700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.802892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.802906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.803098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.803111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.803377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.803392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 
00:27:11.797 [2024-07-15 22:05:05.803604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.803633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.797 qpair failed and we were unable to recover it. 00:27:11.797 [2024-07-15 22:05:05.803874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.797 [2024-07-15 22:05:05.803903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.804196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.804232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.804468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.804498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.804780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.804810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.805178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.805212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.805488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.805519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.805736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.805766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.806077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.806106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.806342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.806372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 
00:27:11.798 [2024-07-15 22:05:05.806545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.806576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.806844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.806874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.807100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.807129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.807453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.807467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.807743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.807756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.807956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.807970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.808171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.808185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.808383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.808406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.808543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.808557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.808765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.808779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 
00:27:11.798 [2024-07-15 22:05:05.808974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.808988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.809179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.809193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.809464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.809495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.809674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.809704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.810024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.810054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.810272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.810302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.810520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.810550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.810728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.810759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.811046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.811076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.811260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.811274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 
00:27:11.798 [2024-07-15 22:05:05.811489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.811502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.811632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.811646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.811847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.811861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.812044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.812058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.812181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.812195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.812388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.812402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.798 [2024-07-15 22:05:05.812549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.798 [2024-07-15 22:05:05.812563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.798 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.812675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.812688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.812832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.812846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.813034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.813048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 
00:27:11.799 [2024-07-15 22:05:05.813239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.813253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.813503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.813517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.813654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.813668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.813804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.813817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.813938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.813952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.814142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.814156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.814563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.814595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.814760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.814790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.814928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.814958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.815129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.815159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 
00:27:11.799 [2024-07-15 22:05:05.815326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.815339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.815549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.815563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.815689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.815703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.815875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.815904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.816125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.816155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.816319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.816350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.816579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.816593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.816785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.816799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.816926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.816939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.817136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.817149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 
00:27:11.799 [2024-07-15 22:05:05.817287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.817302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.817569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.817599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.817757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.817787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.817954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.817984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.818159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.818190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.799 qpair failed and we were unable to recover it. 00:27:11.799 [2024-07-15 22:05:05.818367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.799 [2024-07-15 22:05:05.818397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.800 qpair failed and we were unable to recover it. 00:27:11.800 [2024-07-15 22:05:05.818651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.800 [2024-07-15 22:05:05.818681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.800 qpair failed and we were unable to recover it. 00:27:11.800 [2024-07-15 22:05:05.818841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.800 [2024-07-15 22:05:05.818871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.800 qpair failed and we were unable to recover it. 00:27:11.800 [2024-07-15 22:05:05.819049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.800 [2024-07-15 22:05:05.819079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.800 qpair failed and we were unable to recover it. 00:27:11.800 [2024-07-15 22:05:05.819301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.800 [2024-07-15 22:05:05.819332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.800 qpair failed and we were unable to recover it. 
00:27:11.800 [2024-07-15 22:05:05.819486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.800 [2024-07-15 22:05:05.819519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.800 qpair failed and we were unable to recover it. 00:27:11.800 [2024-07-15 22:05:05.819760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.800 [2024-07-15 22:05:05.819790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.800 qpair failed and we were unable to recover it. 00:27:11.800 [2024-07-15 22:05:05.819954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.800 [2024-07-15 22:05:05.819985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.800 qpair failed and we were unable to recover it. 00:27:11.800 [2024-07-15 22:05:05.820202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.800 [2024-07-15 22:05:05.820250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.800 qpair failed and we were unable to recover it. 00:27:11.800 [2024-07-15 22:05:05.820413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.800 [2024-07-15 22:05:05.820426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.820708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.820738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.820895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.820925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.821081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.821111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.821269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.821283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.821482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.821511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 
00:27:11.801 [2024-07-15 22:05:05.821699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.821728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.821954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.821984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.822264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.822278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.822460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.822474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.822611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.822625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.822755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.822768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.822898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.822912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.823109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.823123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.823309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.823323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.823451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.823465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 
00:27:11.801 [2024-07-15 22:05:05.823716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.823729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.823860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.823874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.824013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.824027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.824207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.824271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.824448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.824478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.824629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.824659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.824884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.824926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.825115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.825129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.825329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.825343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.825595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.825608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 
00:27:11.801 [2024-07-15 22:05:05.825744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.825760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.825860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.825873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.826002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.826015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.826127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.826141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.826323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.826337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.826525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.826555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.826716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.826745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.801 [2024-07-15 22:05:05.826895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.801 [2024-07-15 22:05:05.826925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.801 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.827076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.827105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.827357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.827388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 
00:27:11.802 [2024-07-15 22:05:05.827620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.827650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.827824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.827854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.828007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.828037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.828161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.828191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.828310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.828324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.828458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.828471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.828583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.828596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.828775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.828788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.828933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.828947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.829146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.829160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 
00:27:11.802 [2024-07-15 22:05:05.829347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.829377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.829604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.829634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.829886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.829916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.830142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.830172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.830406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.830437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.830589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.830618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.830866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.830896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.831044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.831057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.831192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.831206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.831426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.831439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 
00:27:11.802 [2024-07-15 22:05:05.831558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.831571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.831682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.831696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.831794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.831808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.831926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.831940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.832055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.832069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.832294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.832309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.832400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.832413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.832548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.832561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.832768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.832782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 00:27:11.802 [2024-07-15 22:05:05.832965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.802 [2024-07-15 22:05:05.832979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.802 qpair failed and we were unable to recover it. 
00:27:11.802 [2024-07-15 22:05:05.833174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.802 [2024-07-15 22:05:05.833188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.802 qpair failed and we were unable to recover it.
00:27:11.802 [2024-07-15 22:05:05.833313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.802 [2024-07-15 22:05:05.833327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.802 qpair failed and we were unable to recover it.
00:27:11.802 [2024-07-15 22:05:05.833508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.802 [2024-07-15 22:05:05.833521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.802 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.833716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.833730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.833930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.833944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.834028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.834040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.834287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.834302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.834433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.834446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.834633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.834647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.834845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.834859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.835079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.835109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.835333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.835364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.835592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.835622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.835775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.835805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.836043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.836073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.836305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.836335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.836561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.836592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.836747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.836777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.837032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.837062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.837387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.837417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.837569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.837599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.837853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.837883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.838164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.838177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.838331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.838345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.838550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.838581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.838747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.838778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.838995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.839026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.839234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.839248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.839495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.839511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.839650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.839665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.839800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.839814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.839946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.839960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.840104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.840117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.840318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.840331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.840520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.840550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.840773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.840803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.841028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.841058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.803 [2024-07-15 22:05:05.841292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.803 [2024-07-15 22:05:05.841306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.803 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.841489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.841502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.841622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.841636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.841817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.841831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.842016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.842030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.842234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.842249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.842361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.842375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.842506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.842519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.842720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.842734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.842926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.842939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.843072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.843086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.843340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.843354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.843541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.843555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.843833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.843847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.844040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.844053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.844304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.844317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.844464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.844477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.844691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.844705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.844906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.844922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.845126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.845140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.845272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.845286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.845482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.845512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.845666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.845695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.845987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.846016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.846319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.846333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.846536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.846549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.846677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.846690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.846949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.846979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.847204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.847241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.847472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.847502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.847656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.847686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.847849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.847879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.848027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.848041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.848253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.804 [2024-07-15 22:05:05.848283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.804 qpair failed and we were unable to recover it.
00:27:11.804 [2024-07-15 22:05:05.848568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.848597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.848731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.848761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.849086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.849116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.849279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.849310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.849612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.849626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.849823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.849837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.850109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.850123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.850322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.850336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.850464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.850477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.850626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.850640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.850886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.850900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.851031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.851047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.851180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.851194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.851384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.851397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.851589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.851603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.851872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.851885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.852002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.852016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.852234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.852249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.852442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.852456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.852592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.852606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.852791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.852823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.853040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.853070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.853188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.853218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.853477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.853507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.853673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.853703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.853965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.854004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.854206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.854219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.854423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.854437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.854661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.805 [2024-07-15 22:05:05.854675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.805 qpair failed and we were unable to recover it.
00:27:11.805 [2024-07-15 22:05:05.854808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.854828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.855086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.855099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.855312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.855326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.855537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.855551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.855743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.855756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.855940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.855954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.856177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.856215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.856548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.856578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.856895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.856924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.857097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.857127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.857301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.857332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.857546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.857559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.857745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.857758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.858020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.858050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.858285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.858316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.858536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.858550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.858740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.858753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.859008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.859022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.859272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.859286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.859483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.859496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.859688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.859702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.859947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.859960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.860091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.860105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.860289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.860323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.860495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.860525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.860787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.860818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.861122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.861152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.861387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.861422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.861643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.861657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.861880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.861893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.862109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.862123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.862392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.862406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.862599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.862612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.862795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.862809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.862954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.862995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.863212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.863252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.806 qpair failed and we were unable to recover it.
00:27:11.806 [2024-07-15 22:05:05.863558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.806 [2024-07-15 22:05:05.863588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.863846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.863876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.864105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.864135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.864364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.864378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.864509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.864523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.864706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.864719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.865021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.865051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.865203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.865239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.865464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.865494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.865671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.865702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.865886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.865916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.866155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.866184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.866425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.866456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.866755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.866784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.867092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.867127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.867416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.867448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.867694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.867725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.867951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.867981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.868210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.868249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.868470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.868501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.868815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.868844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.869062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.869092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.869398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.869412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.869613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.869627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.869824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.869838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.869964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.869977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.870203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.870216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.870427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.870442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.870692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.870706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.870828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.870842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.870974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.870987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.871115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.871129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.871260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.871274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.871477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.807 [2024-07-15 22:05:05.871491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-15 22:05:05.871680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.807 [2024-07-15 22:05:05.871694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.807 qpair failed and we were unable to recover it. 00:27:11.807 [2024-07-15 22:05:05.871832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.807 [2024-07-15 22:05:05.871846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.807 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.871969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.871983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.872189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.872203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.872317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.872331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.872464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.872478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.872676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.872689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.872883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.872922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.873240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.873270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.873429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.873442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 
00:27:11.808 [2024-07-15 22:05:05.873583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.873597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.873796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.873810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.874057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.874071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.874269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.874283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.874411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.874425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.874652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.874681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.874904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.874934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.875184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.875214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.875387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.875417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.875652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.875682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 
00:27:11.808 [2024-07-15 22:05:05.875900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.875929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.876241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.876272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.876442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.876472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.876710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.876739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.877026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.877056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.877366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.877380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.877632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.877646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.877911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.877924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.878176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.878190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.878397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.878412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 
00:27:11.808 [2024-07-15 22:05:05.878580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.878609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.878822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.878852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.879011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.879041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.879335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.808 [2024-07-15 22:05:05.879365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.808 qpair failed and we were unable to recover it. 00:27:11.808 [2024-07-15 22:05:05.879688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.879717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.879974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.880004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.880200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.880214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.880416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.880429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.880555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.880568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.880693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.880707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 
00:27:11.809 [2024-07-15 22:05:05.880954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.880967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.881099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.881112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.881382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.881412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.881571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.881601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.881830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.881860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.882164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.882193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.882351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.882366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.882546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.882577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.882742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.882773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.883115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.883144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 
00:27:11.809 [2024-07-15 22:05:05.883441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.883472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.883728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.883758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.883973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.884004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.884242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.884274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.884491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.884504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.884702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.884716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.884908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.884938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.885243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.885274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.885534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.885564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.885737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.885768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 
00:27:11.809 [2024-07-15 22:05:05.885896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.885927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.886179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.886208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.886499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.886529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.886768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.886782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.886981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.886994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.887124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.887138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.887388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.887420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.887706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.887736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.888020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.888050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 00:27:11.809 [2024-07-15 22:05:05.888264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.809 [2024-07-15 22:05:05.888294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.809 qpair failed and we were unable to recover it. 
00:27:11.809 [2024-07-15 22:05:05.888574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.888588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.888783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.888798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.888989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.889003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.889194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.889208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.889514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.889543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.889772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.889808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.889957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.889987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.890157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.890186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.890439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.890470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.890712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.890742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 
00:27:11.810 [2024-07-15 22:05:05.890971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.891001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.891306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.891338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.891609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.891622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.891905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.891919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.892192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.892205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.892412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.892425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.892604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.892618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.892853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.892883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.893214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.893253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.893487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.893501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 
00:27:11.810 [2024-07-15 22:05:05.893756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.893770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.894024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.894038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.894170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.894184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.894366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.894380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.894526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.894540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.894786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.894800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.894912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.894925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.810 qpair failed and we were unable to recover it. 00:27:11.810 [2024-07-15 22:05:05.895107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.810 [2024-07-15 22:05:05.895120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.895318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.895333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.895516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.895546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 
00:27:11.811 [2024-07-15 22:05:05.895695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.895724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.895957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.895987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.896219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.896261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.896478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.896509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.896815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.896829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.896944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.896958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.897154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.897184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.897363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.897393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.897626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.897656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.897989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.898018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 
00:27:11.811 [2024-07-15 22:05:05.898327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.898357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.898572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.898602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.898776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.898806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.899025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.899055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.899362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.899393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.899642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.899655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.899931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.899945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.900061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.900075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.900275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.900289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.900541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.900554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 
00:27:11.811 [2024-07-15 22:05:05.900716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.900746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.900978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.901008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.901300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.901314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.901496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.901510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.901694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.901708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.901939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.901969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.902184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.902213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.902556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.902586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.902821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.811 [2024-07-15 22:05:05.902851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.811 qpair failed and we were unable to recover it. 00:27:11.811 [2024-07-15 22:05:05.903099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.903134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 
00:27:11.812 [2024-07-15 22:05:05.903350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.903382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.903635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.903665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.903839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.903869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.904089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.904118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.904415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.904429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.904612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.904626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.904745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.904758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.904950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.904964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.905158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.905172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.905450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.905464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 
00:27:11.812 [2024-07-15 22:05:05.905657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.905670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.905873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.905886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.906069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.906083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.906366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.906398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.906634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.906664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.906838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.906868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.907109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.907139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.907362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.907393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.907625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.907655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 00:27:11.812 [2024-07-15 22:05:05.907836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.812 [2024-07-15 22:05:05.907866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.812 qpair failed and we were unable to recover it. 
00:27:11.812 [2024-07-15 22:05:05.908093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.812 [2024-07-15 22:05:05.908122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.812 qpair failed and we were unable to recover it.
00:27:11.818 [... the same three-line failure repeats 210 times in this span (2024-07-15 22:05:05.908 through 22:05:05.959, elapsed marks 00:27:11.812-00:27:11.818), every attempt with errno = 111 against tqpair=0x1e7ffc0, addr=10.0.0.2, port=4420 ...]
00:27:11.818 [2024-07-15 22:05:05.959938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.818 [2024-07-15 22:05:05.959952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.818 qpair failed and we were unable to recover it. 00:27:11.818 [2024-07-15 22:05:05.960146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.818 [2024-07-15 22:05:05.960190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.818 qpair failed and we were unable to recover it. 00:27:11.818 [2024-07-15 22:05:05.960351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.818 [2024-07-15 22:05:05.960381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.818 qpair failed and we were unable to recover it. 00:27:11.818 [2024-07-15 22:05:05.960550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.818 [2024-07-15 22:05:05.960580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.818 qpair failed and we were unable to recover it. 00:27:11.818 [2024-07-15 22:05:05.960792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.960806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.961027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.961041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.961260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.961274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.961353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.961366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.961513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.961526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.961740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.961770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 
00:27:11.819 [2024-07-15 22:05:05.962052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.962082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.962266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.962297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.962669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.962699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.963003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.963033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.963341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.963372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.963589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.963602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.963716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.963730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.963977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.963991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.964184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.964197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.964446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.964460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 
00:27:11.819 [2024-07-15 22:05:05.964670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.964683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.964874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.964888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.965005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.965019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.965268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.965282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.965490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.965504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.965749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.965763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.965891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.965905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.966155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.966169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.966301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.966315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.966595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.966624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 
00:27:11.819 [2024-07-15 22:05:05.966806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.966835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.967057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.967087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.967338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.967368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.967618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.967648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.967766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.967779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.967960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.967973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.968114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.968128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.968374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.968388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.968514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.968528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 00:27:11.819 [2024-07-15 22:05:05.968802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.968831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.819 qpair failed and we were unable to recover it. 
00:27:11.819 [2024-07-15 22:05:05.968997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.819 [2024-07-15 22:05:05.969027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.969261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.969291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.969453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.969466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.969724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.969754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.969991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.970020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.970200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.970261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.970531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.970545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.970819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.970832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.970926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.970938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.971064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.971078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 
00:27:11.820 [2024-07-15 22:05:05.971277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.971291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.971563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.971577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.971774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.971787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.972035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.972049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.972343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.972361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.972481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.972494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.972622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.972636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.972850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.972863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.973056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.973069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.973267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.973280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 
00:27:11.820 [2024-07-15 22:05:05.973485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.973499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.973642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.973656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.973871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.973901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.974115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.974145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.974322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.974353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.974635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.974648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.974860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.974873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.974965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.974978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.975108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.975121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.975313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.975327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 
00:27:11.820 [2024-07-15 22:05:05.975602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.975632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.975914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.975944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.976165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.976194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.976499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.820 [2024-07-15 22:05:05.976529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.820 qpair failed and we were unable to recover it. 00:27:11.820 [2024-07-15 22:05:05.976800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.976814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.976982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.976996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.977245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.977258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.977477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.977508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.977808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.977838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.977995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.978026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 
00:27:11.821 [2024-07-15 22:05:05.978266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.978298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.978539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.978574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.978699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.978713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.978912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.978926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.979120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.979134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.979264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.979294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.979454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.979484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.979707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.979720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.979930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.979960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.980131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.980160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 
00:27:11.821 [2024-07-15 22:05:05.980356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.980370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.980574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.980604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.980786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.980816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.981070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.981100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.981344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.981375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.981575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.981589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.981676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.981689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.981882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.981896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.982085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.982099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.982373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.982404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 
00:27:11.821 [2024-07-15 22:05:05.982668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.982698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.982862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.982893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.983105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.983118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.983305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.983319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.983525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.983539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.821 qpair failed and we were unable to recover it. 00:27:11.821 [2024-07-15 22:05:05.983817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.821 [2024-07-15 22:05:05.983848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.984152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.984182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.984425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.984457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.984761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.984791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.985026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.985040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 
00:27:11.822 [2024-07-15 22:05:05.985162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.985176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.985442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.985456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.985596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.985609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.985740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.985753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.985989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.986003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.986198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.986211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.986438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.986452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.986667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.986696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.986928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.986958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.987125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.987154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 
00:27:11.822 [2024-07-15 22:05:05.987449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.987480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.987763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.987793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.987957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.987987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.988156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.988186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.988440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.988471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.988684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.988697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.988968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.988981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.989178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.989191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.989384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.989398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.989515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.989531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 
00:27:11.822 [2024-07-15 22:05:05.989775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.989789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.989925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.989939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.990141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.990170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.990463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.990493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.990736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.990766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.990992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.991022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.991196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.991237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.991456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.991487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.991714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.991742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.992065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.992093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 
00:27:11.822 [2024-07-15 22:05:05.992423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.992452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.822 qpair failed and we were unable to recover it. 00:27:11.822 [2024-07-15 22:05:05.992624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.822 [2024-07-15 22:05:05.992652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.823 qpair failed and we were unable to recover it. 00:27:11.823 [2024-07-15 22:05:05.992865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.823 [2024-07-15 22:05:05.992878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.823 qpair failed and we were unable to recover it. 00:27:11.823 [2024-07-15 22:05:05.993074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.823 [2024-07-15 22:05:05.993086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.823 qpair failed and we were unable to recover it. 00:27:11.823 [2024-07-15 22:05:05.993301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.823 [2024-07-15 22:05:05.993314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.823 qpair failed and we were unable to recover it. 00:27:11.823 [2024-07-15 22:05:05.993430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.823 [2024-07-15 22:05:05.993443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.823 qpair failed and we were unable to recover it. 00:27:11.823 [2024-07-15 22:05:05.993624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.823 [2024-07-15 22:05:05.993637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.823 qpair failed and we were unable to recover it. 00:27:11.823 [2024-07-15 22:05:05.993887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.823 [2024-07-15 22:05:05.993900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.823 qpair failed and we were unable to recover it. 00:27:11.823 [2024-07-15 22:05:05.994108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.823 [2024-07-15 22:05:05.994121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.823 qpair failed and we were unable to recover it. 00:27:11.823 [2024-07-15 22:05:05.994266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.823 [2024-07-15 22:05:05.994281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:11.823 qpair failed and we were unable to recover it. 
00:27:11.823 [2024-07-15 22:05:05.994378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.994390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.994477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.994489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.994711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.994724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.994970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.994982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.995181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.995194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.995321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.995334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.995524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.995537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.995666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.995678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.995856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.995868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.996046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.996059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.996163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.996175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.996274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.996288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.996488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.996501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.996651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.996664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.996793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.996806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.997008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.997021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.997199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.997212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.997487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.997500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.997692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.997705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.997957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.997970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.998218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.998236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.998470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.998483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.998699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.998729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.998952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.998982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.999290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.999320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.999611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.999625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.823 qpair failed and we were unable to recover it.
00:27:11.823 [2024-07-15 22:05:05.999759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.823 [2024-07-15 22:05:05.999774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:05.999990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.000020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.000274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.000305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.000562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.000592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.000828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.000842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.001094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.001108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.001311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.001325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.001596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.001610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.001741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.001754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.002008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.002021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.002111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.002125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.002255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.002269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.002431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.002444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.002599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.002628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.002819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.002848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.002997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.003026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.003181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.003211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.003517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.003530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.003726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.003757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.003976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.004006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.004155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.004185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.004422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.004452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.004601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.004642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.004887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.004900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.005086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.005099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.005271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.005302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.005536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.005565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.005794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.005829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.006033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.006046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.006302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.006316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.006595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.006608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.006796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.006809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.007084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.007098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.824 [2024-07-15 22:05:06.007251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.824 [2024-07-15 22:05:06.007265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.824 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.007516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.007529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.007736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.007766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.007928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.007958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.008142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.008171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.008484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.008514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.008749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.008762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.008879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.008892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.009148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.009182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.009426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.009494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.009690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.009700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.009892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.009902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.010217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.010231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.010355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.010366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.010549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.010559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.010681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.010691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.010820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.010830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.011036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.011045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.011170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.011180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.011293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.011303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.011504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.011514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.011711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.011725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.011932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.011941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.012139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.012149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.012289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.012299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.012416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.012426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.012528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.012537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:11.825 [2024-07-15 22:05:06.012725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:11.825 [2024-07-15 22:05:06.012735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:11.825 qpair failed and we were unable to recover it.
00:27:12.111 [2024-07-15 22:05:06.012918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.111 [2024-07-15 22:05:06.012949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.111 qpair failed and we were unable to recover it.
00:27:12.111 [2024-07-15 22:05:06.013121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.111 [2024-07-15 22:05:06.013150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.111 qpair failed and we were unable to recover it.
00:27:12.111 [2024-07-15 22:05:06.013314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.111 [2024-07-15 22:05:06.013345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.111 qpair failed and we were unable to recover it.
00:27:12.111 [2024-07-15 22:05:06.013518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.111 [2024-07-15 22:05:06.013528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.111 qpair failed and we were unable to recover it.
00:27:12.111 [2024-07-15 22:05:06.013708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.111 [2024-07-15 22:05:06.013732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.111 qpair failed and we were unable to recover it.
00:27:12.111 [2024-07-15 22:05:06.013956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.111 [2024-07-15 22:05:06.013986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.111 qpair failed and we were unable to recover it.
00:27:12.111 [2024-07-15 22:05:06.014157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.014187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.014454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.014485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.014666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.014696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.014971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.015000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.015279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.015310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.015637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.015647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.015838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.015848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.015961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.015971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.016149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.016159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.016347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.016363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.016500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.016510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.016703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.016713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.016825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.016835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.016977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.016986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.017191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.017207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.017360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.017374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.017578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.017591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.017738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.017752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.017875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.017889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.018070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.018083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.018178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.018191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.018317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.018331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.018528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.018542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.018725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.018738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.018871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.018885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.019025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.019038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.019247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.019260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.019464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.019477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.019606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.019619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.019818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.019831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.020034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.020048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.020306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.020321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.020448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.020462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.020666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.020696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.020978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.021007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.021181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.021210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.021397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.021427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.021674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.021703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.021875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.021889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.022012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.022026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.022297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.112 [2024-07-15 22:05:06.022311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.112 qpair failed and we were unable to recover it.
00:27:12.112 [2024-07-15 22:05:06.022562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.022579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.022773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.022787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.023039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.023052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.023242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.023256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.023478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.023491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.023699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.023729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.023976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.024006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.024254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.024285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.024440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.024470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.024703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.024733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.025011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.025025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.025161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.025175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.025310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.025324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.025508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.025522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.025717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.025747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.025918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.025947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.026116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.026146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.026433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.026463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.026608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.026622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.026883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.026913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.027090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.027119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.027338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.027376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.027578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.027591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.027771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.027784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.028036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.028066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.028221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.028257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.028473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.028503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.028728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.028743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.028956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.028970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.029132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.029145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.029285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.029299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.029443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.029456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.029640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.029653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.029787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.029800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.030016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.030029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.030148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.113 [2024-07-15 22:05:06.030161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.113 qpair failed and we were unable to recover it.
00:27:12.113 [2024-07-15 22:05:06.030433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.113 [2024-07-15 22:05:06.030464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.113 qpair failed and we were unable to recover it. 00:27:12.113 [2024-07-15 22:05:06.030746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.113 [2024-07-15 22:05:06.030775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.113 qpair failed and we were unable to recover it. 00:27:12.113 [2024-07-15 22:05:06.031024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.113 [2024-07-15 22:05:06.031053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.113 qpair failed and we were unable to recover it. 00:27:12.113 [2024-07-15 22:05:06.031334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.113 [2024-07-15 22:05:06.031365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.113 qpair failed and we were unable to recover it. 00:27:12.113 [2024-07-15 22:05:06.031653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.113 [2024-07-15 22:05:06.031666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.113 qpair failed and we were unable to recover it. 00:27:12.113 [2024-07-15 22:05:06.031892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.031905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.032155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.032168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.032301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.032315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.032456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.032470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.032728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.032741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 
00:27:12.114 [2024-07-15 22:05:06.032879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.032893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.033075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.033088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.033337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.033351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.033552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.033566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.033765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.033778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.033891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.033904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.034169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.034183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.034308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.034323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.034626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.034642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.034839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.034852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 
00:27:12.114 [2024-07-15 22:05:06.035032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.035046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.035172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.035186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.035365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.035378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.035587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.035600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.035871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.035884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.036005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.036019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.036156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.036169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.036426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.036439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.036578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.036591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.036837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.036850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 
00:27:12.114 [2024-07-15 22:05:06.037043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.037057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.037256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.037287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.037568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.037598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.037881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.037911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.038065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.038095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.038376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.038407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.038642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.038671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.038971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.038984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.039119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.039132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.039317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.039331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 
00:27:12.114 [2024-07-15 22:05:06.039587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.039617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.039779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.039819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.040045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.040075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.040219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.040258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.040503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.040533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.040872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.114 [2024-07-15 22:05:06.040902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.114 qpair failed and we were unable to recover it. 00:27:12.114 [2024-07-15 22:05:06.041261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.041290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.041537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.041551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.041741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.041754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.041887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.041901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 
00:27:12.115 [2024-07-15 22:05:06.042171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.042184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.042387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.042401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.042591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.042605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.042821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.042851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.043067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.043097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.043337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.043368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.043567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.043580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.043818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.043847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.044068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.044098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.044277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.044309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 
00:27:12.115 [2024-07-15 22:05:06.044481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.044511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.044731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.044760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.044930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.044960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.045111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.045140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.045305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.045335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.045619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.045649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.045784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.045814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.046028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.046041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.046219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.046238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.046488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.046518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 
00:27:12.115 [2024-07-15 22:05:06.046743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.046773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.046990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.047020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.047200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.047239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.047415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.047444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.047690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.047720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.048003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.048033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.048263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.048294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.048534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.048564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.048802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.048832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.049061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.049091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 
00:27:12.115 [2024-07-15 22:05:06.049313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.049343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.049578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.049607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.049902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.049916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.050118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.050132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.050327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.050340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.050478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.050491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.050626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.115 [2024-07-15 22:05:06.050642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.115 qpair failed and we were unable to recover it. 00:27:12.115 [2024-07-15 22:05:06.050724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.050737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.050867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.050881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.051085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.051099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 
00:27:12.116 [2024-07-15 22:05:06.051397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.051435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.051669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.051699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.051929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.051958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.052174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.052204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.052524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.052555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.052776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.052805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.053081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.053095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.053344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.053358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.053483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.053497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.053635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.053649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 
00:27:12.116 [2024-07-15 22:05:06.053843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.053856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.054105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.054118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.054301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.054315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.054529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.054559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.054776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.054806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.055029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.055059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.055349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.055379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.055612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.055641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.055776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.055789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.055987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.056001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 
00:27:12.116 [2024-07-15 22:05:06.056281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.056295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.056500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.056514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.056708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.056722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.056863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.056879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.057018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.057031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.057237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.057251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.057416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.057445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.057625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.057654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.057972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.058001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.058320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.058351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 
00:27:12.116 [2024-07-15 22:05:06.058665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.058694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.116 qpair failed and we were unable to recover it. 00:27:12.116 [2024-07-15 22:05:06.058922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.116 [2024-07-15 22:05:06.058951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.059124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.059154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.059321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.059350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.059631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.059644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.059841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.059855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.060052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.060066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.060353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.060366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.060560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.060574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.060820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.060833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 
00:27:12.117 [2024-07-15 22:05:06.061024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.061037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.061239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.061269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.061438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.061467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.061716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.061746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.061909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.061922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.062071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.062084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.062370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.062384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.062538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.062551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.062756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.062786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.062936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.062967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 
00:27:12.117 [2024-07-15 22:05:06.063203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.063242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.063477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.063507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.063685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.063714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.063905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.063935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.064264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.064278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.064467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.064481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.064598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.064611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.064750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.064764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.065017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.065033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 00:27:12.117 [2024-07-15 22:05:06.065177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.117 [2024-07-15 22:05:06.065191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.117 qpair failed and we were unable to recover it. 
00:27:12.117 [2024-07-15 22:05:06.065330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.117 [2024-07-15 22:05:06.065344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.117 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triplet repeats for tqpair=0x1e7ffc0 through 22:05:06.088861, every attempt with errno = 111, addr=10.0.0.2, port=4420 ...]
00:27:12.120 [2024-07-15 22:05:06.088847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.120 [2024-07-15 22:05:06.088861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.120 qpair failed and we were unable to recover it.
00:27:12.120 [2024-07-15 22:05:06.089011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.120 [2024-07-15 22:05:06.089045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.120 qpair failed and we were unable to recover it.
00:27:12.120 [2024-07-15 22:05:06.089186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.121 [2024-07-15 22:05:06.089212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.121 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f72c8000b90 through 22:05:06.103285, every attempt with errno = 111, addr=10.0.0.2, port=4420 ...]
00:27:12.123 [2024-07-15 22:05:06.103275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.123 [2024-07-15 22:05:06.103285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.123 qpair failed and we were unable to recover it.
00:27:12.123 [2024-07-15 22:05:06.103406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.103416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.103527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.103537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.103825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.103835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.103951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.103961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.104086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.104096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.104234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.104244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.104417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.104427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.104551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.104561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.104679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.104689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.104815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.104825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 
00:27:12.123 [2024-07-15 22:05:06.104948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.104957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.105073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.105082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.105271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.105281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.105489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.105498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.105623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.105632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.105816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.105825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.105947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.105957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.106147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.106157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.106339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.106350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.106481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.106490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 
00:27:12.123 [2024-07-15 22:05:06.106679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.106689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.106888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.106898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.107140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.107150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.107326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.107336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.107463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.107473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.107586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.107596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.107721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.107731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.107852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.107861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.107991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.108001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.108243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.108253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 
00:27:12.123 [2024-07-15 22:05:06.108444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.108454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.108575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.108585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.108783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.108792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.108974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.108984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.109179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.109188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.109379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.109409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.109644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.109678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.109893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.109923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.110212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.123 [2024-07-15 22:05:06.110221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.123 qpair failed and we were unable to recover it. 00:27:12.123 [2024-07-15 22:05:06.110469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.110479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 
00:27:12.124 [2024-07-15 22:05:06.110597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.110608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.110804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.110814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.111041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.111070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.111238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.111269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.111486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.111516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.111738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.111748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.111920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.111929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.112119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.112149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.112451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.112482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.112812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.112842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 
00:27:12.124 [2024-07-15 22:05:06.113021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.113050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.113270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.113300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.113528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.113557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.113718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.113746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.114042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.114051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.114233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.114243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.114403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.114432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.114608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.114638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.114947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.114976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.115140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.115168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 
00:27:12.124 [2024-07-15 22:05:06.115420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.115450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.115729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.115739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.115922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.115932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.116055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.116064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.116248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.116258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.116454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.116463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.116589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.116599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.116790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.116800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.117046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.117075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.117243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.117273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 
00:27:12.124 [2024-07-15 22:05:06.117399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.117428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.117647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.117676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.117835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.117864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.118071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.118081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.124 [2024-07-15 22:05:06.118287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.124 [2024-07-15 22:05:06.118297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.124 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.118424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.118433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.118620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.118632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.118741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.118751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.118941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.118951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.119133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.119162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 
00:27:12.125 [2024-07-15 22:05:06.119334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.119364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.119530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.119559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.119774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.119784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.120017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.120026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.120130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.120139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.120265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.120274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.120499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.120509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.120717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.120727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.120997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.121007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.121126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.121136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 
00:27:12.125 [2024-07-15 22:05:06.121264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.121274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.121449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.121458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.121661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.121671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.121814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.121823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.122004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.122014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.122188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.122198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.122439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.122449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.122544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.122553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.122743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.122754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.122934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.122944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 
00:27:12.125 [2024-07-15 22:05:06.123121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.123130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.123386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.123415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.123640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.123670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.123947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.123957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.124159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.124169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.124358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.124368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.124589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.124619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.124810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.124840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.125003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.125031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.125279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.125289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 
00:27:12.125 [2024-07-15 22:05:06.125416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.125426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.125683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.125712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.125946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.125976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.126286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.126316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.126549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.126578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.125 qpair failed and we were unable to recover it. 00:27:12.125 [2024-07-15 22:05:06.126861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.125 [2024-07-15 22:05:06.126890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.127017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.127029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.127232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.127242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.127382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.127392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.127571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.127580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 
00:27:12.126 [2024-07-15 22:05:06.127705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.127715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.127835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.127845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.128037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.128047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.128167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.128176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.128298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.128308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.128485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.128495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.128618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.128628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.128815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.128825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.129019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.129028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.129287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.129297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 
00:27:12.126 [2024-07-15 22:05:06.129407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.129417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.129603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.129613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.129717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.129727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.129838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.129848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.129974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.129985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.130165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.130175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.130262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.130272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.130516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.130526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.130647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.130656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.130743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.130752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 
00:27:12.126 [2024-07-15 22:05:06.130886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.130896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.130999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.131009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.131257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.131267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.131378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.131387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.131577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.131587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.131788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.131798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.131906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.131915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.132050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.132059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.132268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.132279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 00:27:12.126 [2024-07-15 22:05:06.132411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.126 [2024-07-15 22:05:06.132420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.126 qpair failed and we were unable to recover it. 
00:27:12.126 [2024-07-15 22:05:06.132609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.126 [2024-07-15 22:05:06.132618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.126 qpair failed and we were unable to recover it.
00:27:12.126 [2024-07-15 22:05:06.132803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.126 [2024-07-15 22:05:06.132819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.126 qpair failed and we were unable to recover it.
00:27:12.126 [2024-07-15 22:05:06.132974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.126 [2024-07-15 22:05:06.132983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.126 qpair failed and we were unable to recover it.
00:27:12.126 [2024-07-15 22:05:06.133190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.126 [2024-07-15 22:05:06.133200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.126 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.133374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.133385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.133486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.133494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.133715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.133727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.133940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.133950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.134068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.134080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.134190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.134200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.134439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.134450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.134574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.134584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.134713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.134722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.134987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.134997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.135119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.135129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.135314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.135324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.135440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.135450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.135631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.135642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.135750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.135763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.135898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.135908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.136014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.136023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.136204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.136213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.136411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.136422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.136613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.136623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.136760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.136770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.136970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.136980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.137182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.137191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.137373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.137383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.137651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.137661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.137777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.137787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.137980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.137990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.138102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.138112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.138321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.138331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.138519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.138529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.138704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.138736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.138958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.138987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.139274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.139303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.139616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.139644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.139872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.139881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.140121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.140130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.140344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.140354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.140532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.140542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.140721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.127 [2024-07-15 22:05:06.140731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.127 qpair failed and we were unable to recover it.
00:27:12.127 [2024-07-15 22:05:06.140861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.140871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.141069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.141098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.141334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.141364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.141588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.141622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.141786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.141815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.142096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.142125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.142339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.142349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.142548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.142559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.142841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.142851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.143122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.143131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.143394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.143404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.143630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.143639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.143751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.143760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.144027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.144037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.144157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.144167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.144354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.144365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.144558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.144587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.144830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.144859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.145085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.145114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.145264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.145274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.145483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.145493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.145665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.145674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.145848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.145858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.146058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.146088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.146312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.146342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.146572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.146601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.146769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.146779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.146955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.146997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.147331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.147361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.147536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.147566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.147738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.147767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.148082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.148112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.148423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.148454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.148756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.148785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.148944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.148974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.149194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.149203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.149384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.149395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.149604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.149633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.149853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.128 [2024-07-15 22:05:06.149883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.128 qpair failed and we were unable to recover it.
00:27:12.128 [2024-07-15 22:05:06.150142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.150171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.150358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.150388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.150558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.150586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.150809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.150838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.151061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.151095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.151323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.151333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.151507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.151517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.151789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.151818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.152003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.152033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.152250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.152280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.152499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.152529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.152744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.152773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.153010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.153039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.153322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.153352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.153609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.153638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.153933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.153962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.154122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.154151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.154324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.154334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.154531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.154540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.154659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.154669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.154863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.154872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.155113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.155122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.155309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.155319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.155453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.155463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.155586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.155595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.155834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.155843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.156023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.156033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.156268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.156298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.156475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.156504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.156753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.156762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.156939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.156949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.157062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.157073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.157285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.157296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.157475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.157484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.157594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.129 [2024-07-15 22:05:06.157606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.129 qpair failed and we were unable to recover it.
00:27:12.129 [2024-07-15 22:05:06.157790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.157800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.157924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.157934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.158106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.158115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.158291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.158301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.158479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.158489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.158613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.158623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.158744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.158753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.158945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.158955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.159128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.159138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.159352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.159364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.159622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.159632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.159815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.159824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.159947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.159957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.160081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.160091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.160268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.160278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.160553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.160582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.160764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.160794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.160949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.160978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.161261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.161271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.161404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.161413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.161534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.161544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.161753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.161763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.161977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.161986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.162221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.162258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.162416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.162445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.162610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.162639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.162870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.162900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.163032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.163043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.163218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.163231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.163409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.163419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.163538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.163548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.163734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.163745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.163921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.163930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.164111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.164122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.164259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.164269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.164454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.164464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.164560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8e050 is same with the state(5) to be set
00:27:12.130 [2024-07-15 22:05:06.164741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.164777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.164980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.164994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.165194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.165209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.165394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.130 [2024-07-15 22:05:06.165409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.130 qpair failed and we were unable to recover it.
00:27:12.130 [2024-07-15 22:05:06.165649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.165679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.165850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.165880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.166051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.166081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.166390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.166404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.166543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.166557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.166687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.166701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.166959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.166972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.167189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.167202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.167331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.167345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.167589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.167612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.167825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.167840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.168149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.131 [2024-07-15 22:05:06.168165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.131 qpair failed and we were unable to recover it.
00:27:12.131 [2024-07-15 22:05:06.168370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.168384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.168524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.168556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.168826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.168856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.169108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.169122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.169298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.169314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.169513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.169528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.169620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.169634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.169911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.169941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.170121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.170151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.170372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.170403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 
00:27:12.131 [2024-07-15 22:05:06.170576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.170614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.170859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.170873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.171121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.171135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.171259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.171274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.171413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.171426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.171611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.171642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.171879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.171908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.172074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.172108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.172337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.172351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.172482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.172495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 
00:27:12.131 [2024-07-15 22:05:06.172636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.172651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.172788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.172802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.172927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.172941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.173141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.173155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.173298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.173314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.173442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.173456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.173723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.173734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.173969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.173980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.131 [2024-07-15 22:05:06.174104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.131 [2024-07-15 22:05:06.174113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.131 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.174294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.174304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 
00:27:12.132 [2024-07-15 22:05:06.174490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.174500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.174624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.174634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.174740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.174749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.175006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.175016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.175227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.175238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.175416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.175426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.175604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.175613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.175908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.175919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.176038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.176047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.176165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.176175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 
00:27:12.132 [2024-07-15 22:05:06.176299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.176309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.176441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.176451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.176560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.176569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.176765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.176775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.176956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.176966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.177074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.177084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.177202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.177213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.177394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.177405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.177615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.177626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.177866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.177875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 
00:27:12.132 [2024-07-15 22:05:06.178002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.178014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.178198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.178209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.178350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.178360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.178488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.178498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.178679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.178689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.178874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.178884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.179006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.179016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.179131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.179142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.179318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.179328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.179435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.179445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 
00:27:12.132 [2024-07-15 22:05:06.179640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.179649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.179763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.179774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.179955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.179965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.180079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.180089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.180296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.180307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.180495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.180506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.180641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.180651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.180817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.180828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.180924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.180933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.132 qpair failed and we were unable to recover it. 00:27:12.132 [2024-07-15 22:05:06.181041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.132 [2024-07-15 22:05:06.181050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 
00:27:12.133 [2024-07-15 22:05:06.181298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.181328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.181489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.181519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.181757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.181786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.181946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.181975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.182192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.182221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.182391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.182422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.182701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.182730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.182890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.182906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.183098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.183129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.183364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.183394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 
00:27:12.133 [2024-07-15 22:05:06.183558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.183588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.183736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.183766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.184098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.184128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.184275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.184305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.184480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.184510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.184824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.184864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.185005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.185019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.185243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.185274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.185511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.185541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.185702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.185732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 
00:27:12.133 [2024-07-15 22:05:06.185885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.185924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.186052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.186065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.186205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.186219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.186358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.186372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.186509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.186523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.186804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.186834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.187055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.187085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.187303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.187333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.187482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.187512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.187684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.187714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 
00:27:12.133 [2024-07-15 22:05:06.187998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.188028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.188194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.188236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.188525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.188556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.188842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.188872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.133 [2024-07-15 22:05:06.189161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.133 [2024-07-15 22:05:06.189191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.133 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.189499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.189531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.189772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.189802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.190040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.190070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.190231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.190261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.190516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.190547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 
00:27:12.134 [2024-07-15 22:05:06.190865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.190894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.191065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.191079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.191209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.191223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.191362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.191375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.191511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.191525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.191710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.191723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.191929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.191941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.192115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.192127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.192313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.192323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.192452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.192461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 
00:27:12.134 [2024-07-15 22:05:06.192588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.192597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.192710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.192720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.192825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.192838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.193030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.193059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.193205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.193244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.193411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.193440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.193586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.193615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.193829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.193859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.194172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.194200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.194374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.194404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 
00:27:12.134 [2024-07-15 22:05:06.194637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.194667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.194929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.194959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.195129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.195139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.195385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.195396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.195591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.195621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.195769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.195798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.196025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.196054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.196334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.196344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.196476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.196486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.196729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.196758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 
00:27:12.134 [2024-07-15 22:05:06.196918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.196947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.197257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.197292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.197475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.197485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.197696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.197725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.197962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.197992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.134 qpair failed and we were unable to recover it. 00:27:12.134 [2024-07-15 22:05:06.198222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.134 [2024-07-15 22:05:06.198258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.135 qpair failed and we were unable to recover it. 00:27:12.135 [2024-07-15 22:05:06.198402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.135 [2024-07-15 22:05:06.198412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.135 qpair failed and we were unable to recover it. 00:27:12.135 [2024-07-15 22:05:06.198546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.135 [2024-07-15 22:05:06.198556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.135 qpair failed and we were unable to recover it. 00:27:12.135 [2024-07-15 22:05:06.198680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.135 [2024-07-15 22:05:06.198689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.135 qpair failed and we were unable to recover it. 00:27:12.135 [2024-07-15 22:05:06.198861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.135 [2024-07-15 22:05:06.198870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.135 qpair failed and we were unable to recover it. 
00:27:12.135 [2024-07-15 22:05:06.199060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.135 [2024-07-15 22:05:06.199069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.135 qpair failed and we were unable to recover it.
00:27:12.135 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats 64 more times for tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420, timestamps 22:05:06.199177 through 22:05:06.212197 ...]
00:27:12.136 [2024-07-15 22:05:06.212386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.136 [2024-07-15 22:05:06.212420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.136 qpair failed and we were unable to recover it.
00:27:12.136 [... the same triplet repeats 23 more times for tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420, timestamps 22:05:06.212641 through 22:05:06.218199 ...]
00:27:12.137 [2024-07-15 22:05:06.218454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.137 [2024-07-15 22:05:06.218487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.137 qpair failed and we were unable to recover it.
00:27:12.137 [... the same triplet repeats 120 more times for tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420, timestamps 22:05:06.218677 through 22:05:06.241307 ...]
00:27:12.140 [2024-07-15 22:05:06.241429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.241438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.241632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.241643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.241819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.241828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.242012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.242021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.242200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.242210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.242354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.242364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.242549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.242559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.242687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.242697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.242814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.242823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.243014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.243024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 
00:27:12.140 [2024-07-15 22:05:06.243196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.243206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.243425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.243435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.243622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.243631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.243825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.243853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.244038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.244066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.244364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.244395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.244531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.244560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.244818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.244847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.245129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.140 [2024-07-15 22:05:06.245158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.140 qpair failed and we were unable to recover it. 00:27:12.140 [2024-07-15 22:05:06.245454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.245465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 
00:27:12.141 [2024-07-15 22:05:06.245727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.245737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.245843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.245852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.246036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.246046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.246118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.246126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.246300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.246310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.246550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.246560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.246745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.246755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.246864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.246875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.247112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.247122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.247304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.247314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 
00:27:12.141 [2024-07-15 22:05:06.247422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.247431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.247611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.247622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.247740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.247750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.247891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.247901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.247979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.247988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.248168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.248178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.248359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.248371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.248606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.248617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.248799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.248808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.248919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.248929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 
00:27:12.141 [2024-07-15 22:05:06.249111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.249120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.249340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.249350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.249468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.249479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.249661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.249671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.249864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.249874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.250000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.250010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.250194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.250204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.250474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.250484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.250601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.250611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.250798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.250808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 
00:27:12.141 [2024-07-15 22:05:06.250928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.250938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.251188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.251198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.251415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.251444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.251742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.251772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.252010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.252039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.252207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.252272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.252437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.252466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.252604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.252613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.141 [2024-07-15 22:05:06.252857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.141 [2024-07-15 22:05:06.252867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.141 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.253050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.253060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 
00:27:12.142 [2024-07-15 22:05:06.253182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.253192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.253372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.253382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.253496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.253506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.253632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.253642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.253844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.253855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.254032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.254042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.254270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.254299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.254469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.254497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.254714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.254742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.254945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.254974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 
00:27:12.142 [2024-07-15 22:05:06.255228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.255238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.255445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.255454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.255636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.255646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.255891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.255901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.256015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.256024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.256146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.256156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.256362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.256372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.256512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.256523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.256600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.256609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.256794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.256803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 
00:27:12.142 [2024-07-15 22:05:06.256975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.256985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.257113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.257123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.257299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.257309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.257574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.257584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.257699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.257709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.257827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.257837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.257964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.257974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.258156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.258166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.142 [2024-07-15 22:05:06.258287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.142 [2024-07-15 22:05:06.258297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.142 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.258428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.258438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 
00:27:12.143 [2024-07-15 22:05:06.258575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.258584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.258749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.258759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.258825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.258833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.258942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.258952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.259194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.259204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.259324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.259334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.259458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.259469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.259580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.259590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.259694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.259703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.259809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.259821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 
00:27:12.143 [2024-07-15 22:05:06.260109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.260119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.260305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.260315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.260436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.260446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.260554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.260563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.260673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.260683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.260873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.260883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.261070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.261099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.261333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.261362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.261535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.261565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.261849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.261878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 
00:27:12.143 [2024-07-15 22:05:06.261999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.262028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.262190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.262218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.262351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.262381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.262540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.262550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.262738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.262766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.263037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.263065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.263211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.263221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.263397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.263410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.263602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.263612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.263802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.263812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 
00:27:12.143 [2024-07-15 22:05:06.264059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.264088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.264265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.264295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.264553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.264582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.264813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.264842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.265008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.265037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.265250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.265279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.265556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.265585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.265816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.265845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.266152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.143 [2024-07-15 22:05:06.266182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.143 qpair failed and we were unable to recover it. 00:27:12.143 [2024-07-15 22:05:06.266342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.144 [2024-07-15 22:05:06.266372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.144 qpair failed and we were unable to recover it. 
00:27:12.144 [2024-07-15 22:05:06.266599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.144 [2024-07-15 22:05:06.266628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.144 qpair failed and we were unable to recover it.
[... the same three-record sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 22:05:06.266870 through 22:05:06.288196 ...]
00:27:12.146 [2024-07-15 22:05:06.288488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.146 [2024-07-15 22:05:06.288557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.146 qpair failed and we were unable to recover it.
00:27:12.146 [2024-07-15 22:05:06.288870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.146 [2024-07-15 22:05:06.288937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.146 qpair failed and we were unable to recover it.
00:27:12.146 [2024-07-15 22:05:06.289334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.146 [2024-07-15 22:05:06.289387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420
00:27:12.146 qpair failed and we were unable to recover it.
[... the connect() failed (errno = 111) / sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence then resumes and repeats from 22:05:06.289541 through 22:05:06.313525 ...]
00:27:12.149 [2024-07-15 22:05:06.313805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.149 [2024-07-15 22:05:06.313834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.149 qpair failed and we were unable to recover it.
00:27:12.149 [2024-07-15 22:05:06.314014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.314043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 00:27:12.149 [2024-07-15 22:05:06.314319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.314328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 00:27:12.149 [2024-07-15 22:05:06.314510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.314520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 00:27:12.149 [2024-07-15 22:05:06.314653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.314663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 00:27:12.149 [2024-07-15 22:05:06.314914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.314936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 00:27:12.149 [2024-07-15 22:05:06.315060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.315071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 00:27:12.149 [2024-07-15 22:05:06.315311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.315321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 00:27:12.149 [2024-07-15 22:05:06.315444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.315454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 00:27:12.149 [2024-07-15 22:05:06.315686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.315696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 00:27:12.149 [2024-07-15 22:05:06.315874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.315883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.149 qpair failed and we were unable to recover it. 
00:27:12.149 [2024-07-15 22:05:06.316050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.149 [2024-07-15 22:05:06.316060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.316234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.316244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.316505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.316515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.316711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.316720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.316840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.316850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.316976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.316986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.317188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.317197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.317391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.317401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.317651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.317681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.317899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.317928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 
00:27:12.150 [2024-07-15 22:05:06.318151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.318180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.318401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.318431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.318586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.318596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.318842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.318870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.319121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.319150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.319317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.319346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.319574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.319603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.319788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.319817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.320123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.320152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.320379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.320389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 
00:27:12.150 [2024-07-15 22:05:06.320511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.320521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.320718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.320728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.320964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.320974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.321257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.321267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.321403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.321413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.321594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.321635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.321901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.321931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.322166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.322195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.322385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.322395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.322599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.322609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 
00:27:12.150 [2024-07-15 22:05:06.322785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.322795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.323048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.323057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.323297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.323307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.323574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.323584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.323703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.323714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.323831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.323840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.324032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.324043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.324258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.324267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.324450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.324460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.324708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.324737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 
00:27:12.150 [2024-07-15 22:05:06.324901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.324930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.325189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.325218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.150 [2024-07-15 22:05:06.325420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.150 [2024-07-15 22:05:06.325450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.150 qpair failed and we were unable to recover it. 00:27:12.442 [2024-07-15 22:05:06.325799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.442 [2024-07-15 22:05:06.325829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.442 qpair failed and we were unable to recover it. 00:27:12.442 [2024-07-15 22:05:06.326147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.326178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.326359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.326402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.326648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.326657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.326846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.326856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.326977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.326987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.327229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.327239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 
00:27:12.443 [2024-07-15 22:05:06.327458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.327468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.327658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.327668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.327854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.327863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.328040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.328050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.328167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.328177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.328362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.328378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.328568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.328578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.328839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.328848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.329094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.329104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.329215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.329229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 
00:27:12.443 [2024-07-15 22:05:06.329349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.329359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.329571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.329581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.329757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.329767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.329944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.329954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.330173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.330203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.330392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.330422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.330637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.330666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.330975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.331004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.331242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.331272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.331441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.331470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 
00:27:12.443 [2024-07-15 22:05:06.331780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.331809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.332030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.332060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.332295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.332326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.332581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.332611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.332790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.332824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.333072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.333101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.333323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.333334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.333507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.443 [2024-07-15 22:05:06.333517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.443 qpair failed and we were unable to recover it. 00:27:12.443 [2024-07-15 22:05:06.333635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.333645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.333882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.333892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 
00:27:12.444 [2024-07-15 22:05:06.334086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.334096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.334284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.334294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.334535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.334563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.334784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.334813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.335099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.335128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.335354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.335364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.335602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.335612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.335806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.335816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.335928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.335937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.336110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.336119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 
00:27:12.444 [2024-07-15 22:05:06.336233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.336246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.336366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.336375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.336554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.336564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.336763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.336773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.337000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.337030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.337193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.337222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.337539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.337568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.337718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.337747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.338104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.338132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.338418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.338448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 
00:27:12.444 [2024-07-15 22:05:06.338616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.338655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.338949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.338958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.339135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.339144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.339336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.339346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.339529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.339557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.339913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.339942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.340156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.340186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.340414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.340443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.340611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.340640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.340824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.340853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 
00:27:12.444 [2024-07-15 22:05:06.341082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.341111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.341264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.341274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.341416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.444 [2024-07-15 22:05:06.341426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.444 qpair failed and we were unable to recover it. 00:27:12.444 [2024-07-15 22:05:06.341614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.445 [2024-07-15 22:05:06.341624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.445 qpair failed and we were unable to recover it. 00:27:12.445 [2024-07-15 22:05:06.341862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.445 [2024-07-15 22:05:06.341874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.445 qpair failed and we were unable to recover it. 00:27:12.445 [2024-07-15 22:05:06.342074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.445 [2024-07-15 22:05:06.342104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.445 qpair failed and we were unable to recover it. 00:27:12.445 [2024-07-15 22:05:06.342336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.445 [2024-07-15 22:05:06.342366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.445 qpair failed and we were unable to recover it. 00:27:12.445 [2024-07-15 22:05:06.342601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.445 [2024-07-15 22:05:06.342631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.445 qpair failed and we were unable to recover it. 00:27:12.445 [2024-07-15 22:05:06.342860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.445 [2024-07-15 22:05:06.342889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.445 qpair failed and we were unable to recover it. 00:27:12.445 [2024-07-15 22:05:06.343103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.445 [2024-07-15 22:05:06.343132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.445 qpair failed and we were unable to recover it. 
00:27:12.445 [2024-07-15 22:05:06.343346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.445 [2024-07-15 22:05:06.343356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.445 qpair failed and we were unable to recover it.
00:27:12.445 [... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." — repeats continuously from 22:05:06.343346 through 22:05:06.389275 against addr=10.0.0.2, port=4420, for tqpair values 0x7f72c8000b90, 0x7f72d0000b90, and 0x1e7ffc0 ...]
00:27:12.451 [2024-07-15 22:05:06.389446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.389456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.389705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.389735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.389888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.389917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.390149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.390178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.390370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.390408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.390546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.390556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.390732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.390741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.390972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.391001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.391168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.391197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.391435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.391465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 
00:27:12.451 [2024-07-15 22:05:06.394514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.394547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.394852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.394862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.451 [2024-07-15 22:05:06.395089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.451 [2024-07-15 22:05:06.395119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.451 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.395400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.395429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.395659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.395688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.395925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.395955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.396242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.396272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.396429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.396459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.396691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.396720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.396936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.396945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 
00:27:12.452 [2024-07-15 22:05:06.397136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.397146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.397286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.397295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.397477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.397487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.397682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.397711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.397838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.397867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.398180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.398209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.398397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.398427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.398565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.398574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.398760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.398769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.398884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.398894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 
00:27:12.452 [2024-07-15 22:05:06.399079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.399089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.399288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.399319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.399540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.399569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.399800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.399829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.400140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.400170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.400482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.400512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.400729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.400758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.400887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.400916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.401233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.401268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.401449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.401477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 
00:27:12.452 [2024-07-15 22:05:06.401706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.401736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.401913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.401923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.402039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.402049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.402262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.402272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.402390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.402400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.452 qpair failed and we were unable to recover it. 00:27:12.452 [2024-07-15 22:05:06.402583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.452 [2024-07-15 22:05:06.402593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.402781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.402791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.403004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.403013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.403204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.403213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.403405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.403415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 
00:27:12.453 [2024-07-15 22:05:06.403617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.403627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.403750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.403760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.403889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.403899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.404107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.404116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.404231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.404241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.404424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.404434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.404639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.404668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.404904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.404934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.405214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.405255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.405474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.405503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 
00:27:12.453 [2024-07-15 22:05:06.405736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.405765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.406031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.406041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.406231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.406241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.406432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.406441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.406652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.406682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.406861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.406890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.407191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.407220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.407475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.407504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.407725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.407754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.407983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.408012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 
00:27:12.453 [2024-07-15 22:05:06.408325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.408355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.408660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.408690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.408855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.408884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.453 [2024-07-15 22:05:06.409054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.453 [2024-07-15 22:05:06.409083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.453 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.409391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.409421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.409650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.409660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.409851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.409861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.410022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.410031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.410238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.410273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.410505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.410535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 
00:27:12.454 [2024-07-15 22:05:06.410783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.410812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.411144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.411174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.411401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.411430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.411597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.411626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.411849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.411879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.412036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.412065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.412348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.412379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.412554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.412584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.412814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.412843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.413075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.413104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 
00:27:12.454 [2024-07-15 22:05:06.413358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.413387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.413617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.413647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.413852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.413881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.414096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.414125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.414378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.414408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.414656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.414685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.415007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.415017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.415248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.415279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.415517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.415546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.415770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.415779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 
00:27:12.454 [2024-07-15 22:05:06.415954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.415963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.416174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.416184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.416397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.416407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.416648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.416677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.416925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.416954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.417269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.417299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.417550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.417580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.417849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.417869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.454 [2024-07-15 22:05:06.418139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.454 [2024-07-15 22:05:06.418149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.454 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.418396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.418406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 
00:27:12.455 [2024-07-15 22:05:06.418513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.418523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.418778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.418787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.419012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.419041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.419272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.419303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.419522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.419551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.419854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.419864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.419984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.419994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.420143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.420153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.420327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.420339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.420478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.420488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 
00:27:12.455 [2024-07-15 22:05:06.420725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.420735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.420984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.420994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.421183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.421192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.421323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.421334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.421602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.421611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.421795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.421806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.421982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.421992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.422196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.422232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.422398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.422428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.422646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.422675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 
00:27:12.455 [2024-07-15 22:05:06.423000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.423009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.423191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.423201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.423277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.423286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.423422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.423431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.423615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.423625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.423815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.423844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.424148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.424178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.424429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.424459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.424756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.424786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.425097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.425126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 
00:27:12.455 [2024-07-15 22:05:06.425363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.425394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.425639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.425649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.455 [2024-07-15 22:05:06.425833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.455 [2024-07-15 22:05:06.425842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.455 qpair failed and we were unable to recover it. 00:27:12.456 [2024-07-15 22:05:06.426022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.456 [2024-07-15 22:05:06.426051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.456 qpair failed and we were unable to recover it. 00:27:12.456 [2024-07-15 22:05:06.426287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.456 [2024-07-15 22:05:06.426318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.456 qpair failed and we were unable to recover it. 00:27:12.456 [2024-07-15 22:05:06.426556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.456 [2024-07-15 22:05:06.426565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.456 qpair failed and we were unable to recover it. 00:27:12.456 [2024-07-15 22:05:06.426769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.456 [2024-07-15 22:05:06.426779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.456 qpair failed and we were unable to recover it. 00:27:12.456 [2024-07-15 22:05:06.426899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.456 [2024-07-15 22:05:06.426909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.456 qpair failed and we were unable to recover it. 00:27:12.456 [2024-07-15 22:05:06.427122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.456 [2024-07-15 22:05:06.427132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.456 qpair failed and we were unable to recover it. 00:27:12.456 [2024-07-15 22:05:06.427241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.456 [2024-07-15 22:05:06.427251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.456 qpair failed and we were unable to recover it. 
00:27:12.456 [2024-07-15 22:05:06.427436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.427446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.427619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.427629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.427766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.427776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.428010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.428019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.428193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.428203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.428334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.428344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.428586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.428615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.428835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.428865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.429043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.429077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.429316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.429346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.429563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.429593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.429876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.429885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.430081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.430091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.430341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.430351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.430468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.430478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.430706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.430715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.430847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.430857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.430970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.430980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.431106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.431116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.431306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.431316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.431444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.431454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.431644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.431654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.431786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.431796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.432037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.432047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.432217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.432230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.432364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.432374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.432643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.432652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.432864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.432874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.456 qpair failed and we were unable to recover it.
00:27:12.456 [2024-07-15 22:05:06.433097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.456 [2024-07-15 22:05:06.433126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.433314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.433344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.433654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.433684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.433917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.433927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.434201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.434211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.434304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.434313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.434612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.434621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.434830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.434860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.435107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.435136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.435376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.435406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.435534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.435544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.435714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.435724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.436009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.436018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.436283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.436293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.436535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.436544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.436654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.436665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.436844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.436853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.437024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.437034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.437304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.437334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.437587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.437616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.437831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.437865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.438094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.438123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.438356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.438386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.438564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.438574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.438772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.438802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.439024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.439053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.439357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.439387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.439659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.439669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.439936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.439946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.440190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.440200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.440334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.440344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.440537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.440546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.457 [2024-07-15 22:05:06.440687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.457 [2024-07-15 22:05:06.440697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.457 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.440951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.440980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.441244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.441275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.441545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.441574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.441804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.441833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.442142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.442172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.442350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.442381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.442542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.442551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.442738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.442767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.443014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.443044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.443355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.443386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.443600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.443610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.443875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.443885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.444058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.444068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.444264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.444293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.444577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.444644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.444867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.444935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.445255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.445290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.445459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.445489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.445721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.445752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.446011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.446025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.446231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.446245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.446497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.446510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.446773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.446803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.447029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.447058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.447245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.447276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.447561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.447591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.447875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.447905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.448144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.448182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.458 [2024-07-15 22:05:06.448474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.458 [2024-07-15 22:05:06.448510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.458 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.448760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.448773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.448954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.448968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.449182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.449195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.449415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.449429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.449625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.449639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.449821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.449852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.450030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.450060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.450282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.450313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.450573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.450603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.450786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.450816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.450994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.451024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.451259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.451289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.451529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.451559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.451717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.451730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.451977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.451991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.452242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.452256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.452378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.452391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.452523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.452537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.452668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.452682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.452810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.452823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.452949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.452963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.453158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.453172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.453372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.453385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.453579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.453593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.453793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.453807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.453940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.453954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.454156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.454171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.454297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.454314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.454503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.454518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.454648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.454662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.454921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.454936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.455151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.455166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.455363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.455379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.455573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.455589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.455790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.459 [2024-07-15 22:05:06.455805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.459 qpair failed and we were unable to recover it.
00:27:12.459 [2024-07-15 22:05:06.456075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.456090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.456382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.456413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.456653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.456685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.456856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.456892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.457199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.457239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.457419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.457451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.457641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.457672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.457975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.458006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.458235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.458268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.458522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.458553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.458756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.458772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.458911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.458943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.459265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.459298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.459527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.459559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.459808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.459839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.460071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.460101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.460329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.460362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.460600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.460631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.460940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.460971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.461145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.461176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.461439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.461477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.461697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.461713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.461986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.462001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.462268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.462300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.462612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.462644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.462820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.462852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.463025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.463056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.463245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.463276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.463499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.463531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.463697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.463729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.463946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.463982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.464260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.464329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.464568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.460 [2024-07-15 22:05:06.464603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.460 qpair failed and we were unable to recover it.
00:27:12.460 [2024-07-15 22:05:06.464817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.464828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.465019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.465050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.465346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.465380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.465552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.465583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.465870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.465901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.466055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.466086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.466371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.466403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.466551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.466563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.466751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.466781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.467008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.467039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.467411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.467452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.467581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.461 [2024-07-15 22:05:06.467592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.461 qpair failed and we were unable to recover it.
00:27:12.461 [2024-07-15 22:05:06.467778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.467809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.467975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.468007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.468256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.468288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.468596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.468628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.468913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.468944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.469200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.469254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.469462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.469493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.469672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.469684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.469884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.469915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.470151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.470183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 
00:27:12.461 [2024-07-15 22:05:06.470430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.470462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.470718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.470729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.470967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.470999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.471167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.471199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.471471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.471502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.471789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.471832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.472071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.472102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.472323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.472355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.472595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.472626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.472920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.472951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 
00:27:12.461 [2024-07-15 22:05:06.473170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.473202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.473429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.473461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.473720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.461 [2024-07-15 22:05:06.473748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.461 qpair failed and we were unable to recover it. 00:27:12.461 [2024-07-15 22:05:06.473979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.474010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.474246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.474277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.474655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.474724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.475025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.475060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.475288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.475321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.475536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.475568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.475804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.475835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 
00:27:12.462 [2024-07-15 22:05:06.476069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.476085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.476351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.476383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.476577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.476609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.476855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.476886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.477145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.477161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.477434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.477465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.477651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.477681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.477939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.477982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.478184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.478199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.478409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.478424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 
00:27:12.462 [2024-07-15 22:05:06.478628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.478659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.478945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.478975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.479101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.479131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.479462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.479502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.479774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.479790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.479976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.479991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.480248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.480279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.480455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.480486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.480767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.480782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.480910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.480926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 
00:27:12.462 [2024-07-15 22:05:06.481108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.481123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.481302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.481317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.481556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.481574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.481670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.481684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.481915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.481946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.482179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.482210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.482487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.482519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.462 [2024-07-15 22:05:06.482749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.462 [2024-07-15 22:05:06.482780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.462 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.483071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.483103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.483392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.483424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 
00:27:12.463 [2024-07-15 22:05:06.483731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.483762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.484005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.484036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.484288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.484321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.484578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.484608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.484844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.484859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.485050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.485066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.485237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.485253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.485356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.485370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.485474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.485489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.485684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.485715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 
00:27:12.463 [2024-07-15 22:05:06.485877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.485909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.486170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.486200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.486438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.486470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.486709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.486724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.486865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.486881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.487020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.487051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.487239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.487272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.487581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.487612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.487749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.487765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.487889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.487937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 
00:27:12.463 [2024-07-15 22:05:06.488166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.488199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.488511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.488580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.488791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.488826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.489046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.489078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.489321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.489355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.489527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.489558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.463 [2024-07-15 22:05:06.489715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.463 [2024-07-15 22:05:06.489748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.463 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.489980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.489995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.490146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.490161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.490362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.490378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 
00:27:12.464 [2024-07-15 22:05:06.490586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.490602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.490749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.490765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.490972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.491003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.491242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.491274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.491577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.491608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.491885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.491899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.491992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.492006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.492241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.492273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.492529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.492560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.492730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.492760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 
00:27:12.464 [2024-07-15 22:05:06.492985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.493017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.493180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.493211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.493507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.493539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.493711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.493742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.494065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.494096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.494323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.494356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.494578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.494641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.494837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.494850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.495054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.495086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.495405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.495440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 
00:27:12.464 [2024-07-15 22:05:06.495669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.495700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.495923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.495935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.496131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.496161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.496454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.496488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.496750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.496784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.496986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.496998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.497191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.497203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.497444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.497456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.497663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.497675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 00:27:12.464 [2024-07-15 22:05:06.497859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.497899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.464 qpair failed and we were unable to recover it. 
00:27:12.464 [2024-07-15 22:05:06.498119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.464 [2024-07-15 22:05:06.498150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.498328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.498359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.498537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.498570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.498757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.498789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.498920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.498932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.499233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.499245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.499436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.499448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.499719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.499750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.499933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.499964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.500127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.500158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 
00:27:12.465 [2024-07-15 22:05:06.500398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.500429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.500650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.500680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.500965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.500995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.501219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.501269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.501554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.501585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.501748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.501779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.501999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.502031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.502291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.502323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.502567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.502597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.502885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.502916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 
00:27:12.465 [2024-07-15 22:05:06.503197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.503208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.503406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.503438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.503602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.503633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.503853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.503884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.504181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.504212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.504466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.504497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.504812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.504843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.505031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.505062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.505274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.505286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.505502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.505514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 
00:27:12.465 [2024-07-15 22:05:06.505697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.505708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.505842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.505853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.506103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.506134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.506355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.506387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.506628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.506659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.465 qpair failed and we were unable to recover it. 00:27:12.465 [2024-07-15 22:05:06.506824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.465 [2024-07-15 22:05:06.506836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.466 qpair failed and we were unable to recover it. 00:27:12.466 [2024-07-15 22:05:06.507051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.466 [2024-07-15 22:05:06.507082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.466 qpair failed and we were unable to recover it. 00:27:12.466 [2024-07-15 22:05:06.507315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.466 [2024-07-15 22:05:06.507346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.466 qpair failed and we were unable to recover it. 00:27:12.466 [2024-07-15 22:05:06.507512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.466 [2024-07-15 22:05:06.507544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.466 qpair failed and we were unable to recover it. 00:27:12.466 [2024-07-15 22:05:06.507765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.466 [2024-07-15 22:05:06.507779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.466 qpair failed and we were unable to recover it. 
00:27:12.471 [2024-07-15 22:05:06.551428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.471 [2024-07-15 22:05:06.551460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.471 qpair failed and we were unable to recover it. 00:27:12.471 [2024-07-15 22:05:06.551701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.471 [2024-07-15 22:05:06.551731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.471 qpair failed and we were unable to recover it. 00:27:12.471 [2024-07-15 22:05:06.551949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.471 [2024-07-15 22:05:06.551961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.471 qpair failed and we were unable to recover it. 00:27:12.471 [2024-07-15 22:05:06.552145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.471 [2024-07-15 22:05:06.552156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.471 qpair failed and we were unable to recover it. 00:27:12.471 [2024-07-15 22:05:06.552299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.471 [2024-07-15 22:05:06.552312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.471 qpair failed and we were unable to recover it. 00:27:12.471 [2024-07-15 22:05:06.552499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.552510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.552646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.552657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.552780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.552792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.552982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.553012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.553295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.553326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 
00:27:12.472 [2024-07-15 22:05:06.553563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.553594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.553763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.553774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.553984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.554007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.554128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.554140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.554339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.554351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.554523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.554535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.554714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.554726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.554923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.554953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.555125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.555156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.555370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.555402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 
00:27:12.472 [2024-07-15 22:05:06.555642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.555672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.555906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.555936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.556212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.556228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.556341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.556353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.556464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.556476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.556586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.556598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.556810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.556822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.557059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.557071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.557251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.557263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.557461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.557492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 
00:27:12.472 [2024-07-15 22:05:06.557651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.557682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.557964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.557995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.558160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.558191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.558499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.558530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.558830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.558861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.559098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.559128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.559361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.559398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.559628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.559658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.472 [2024-07-15 22:05:06.559824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.472 [2024-07-15 22:05:06.559855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.472 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.560095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.560107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 
00:27:12.473 [2024-07-15 22:05:06.560334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.560365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.560677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.560709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.560894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.560925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.561209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.561250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.561486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.561517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.561747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.561778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.562008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.562039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.562266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.562298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.562524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.562555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.562802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.562832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 
00:27:12.473 [2024-07-15 22:05:06.563002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.563014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.563191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.563202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.563387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.563399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.563544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.563556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.563752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.563783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.564006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.564035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.564196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.564234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.564529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.564559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.564745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.564776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.565005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.565017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 
00:27:12.473 [2024-07-15 22:05:06.565309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.565339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.565599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.565630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.565908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.565919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.566062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.566093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.566239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.566269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.566510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.566540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.473 qpair failed and we were unable to recover it. 00:27:12.473 [2024-07-15 22:05:06.566852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.473 [2024-07-15 22:05:06.566883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.567053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.567083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.567285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.567297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.567482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.567494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 
00:27:12.474 [2024-07-15 22:05:06.567629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.567641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.567894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.567925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.568077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.568108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.568277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.568309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.568547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.568577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.568770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.568800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.569045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.569058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.569182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.569211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.569458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.569490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.569742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.569772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 
00:27:12.474 [2024-07-15 22:05:06.570034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.570064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.570280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.570312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.570532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.570562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.570799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.570830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.571090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.571121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.571419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.571450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.571671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.571702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.571878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.571889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.572012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.572055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.572211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.572250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 
00:27:12.474 [2024-07-15 22:05:06.572490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.572521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.572751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.572782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.573032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.573062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.573301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.573313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.573491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.573502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.573633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.573664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.573899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.573930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.574241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.574272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.474 [2024-07-15 22:05:06.574519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.474 [2024-07-15 22:05:06.574550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.474 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.574714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.574746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 
00:27:12.475 [2024-07-15 22:05:06.574908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.574919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.575135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.575165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.575354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.575385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.575645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.575676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.575923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.575954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.576206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.576250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.576506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.576536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.576794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.576824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.576988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.577019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.577191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.577222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 
00:27:12.475 [2024-07-15 22:05:06.577383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.577414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.577646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.577677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.577842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.577873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.578105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.578136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.578327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.578339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.578459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.578470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.578710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.578746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.578933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.578964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.579292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.579324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.579550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.579581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 
00:27:12.475 [2024-07-15 22:05:06.579816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.579846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.580015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.580046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.580294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.580306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.580437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.580449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.580566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.580578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.580832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.580863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.581141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.581171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.581424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.581455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.581680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.581710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.581938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.581949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 
00:27:12.475 [2024-07-15 22:05:06.582149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.582160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.582346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.582377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.582547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.582578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.582736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.475 [2024-07-15 22:05:06.582767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.475 qpair failed and we were unable to recover it. 00:27:12.475 [2024-07-15 22:05:06.582890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.582921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.583144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.583156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.583278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.583289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.583493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.583505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.583757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.583788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.583945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.583976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 
00:27:12.476 [2024-07-15 22:05:06.584195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.584234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.584463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.584494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.584724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.584759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.584881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.584893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.585074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.585104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.585410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.585442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.585672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.585703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.585914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.585926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.586130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.586160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.586330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.586361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 
00:27:12.476 [2024-07-15 22:05:06.586657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.586688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.586902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.586913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.587038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.587051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.587239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.587251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.587423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.587435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.587640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.587652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.587763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.587797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.588087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.588117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.588355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.588387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.588618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.588649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 
00:27:12.476 [2024-07-15 22:05:06.588873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.588903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.589121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.589151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.589305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.589317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.589534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.589546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.589728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.589740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.589895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.589925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.590158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.476 [2024-07-15 22:05:06.590188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.476 qpair failed and we were unable to recover it. 00:27:12.476 [2024-07-15 22:05:06.590380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.590412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.590564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.590594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.590841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.590872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 
00:27:12.477 [2024-07-15 22:05:06.591020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.591033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.591258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.591289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.591549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.591579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.591801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.591832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.592114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.592144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.592282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.592313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.592555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.592586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.592765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.592796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.592954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.592985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.593187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.593198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 
00:27:12.477 [2024-07-15 22:05:06.593391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.593403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.593518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.593530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.593719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.593731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.593923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.593954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.594114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.594144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.594451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.594481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.594698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.594729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.594969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.595000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.595205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.595216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.595395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.595407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 
00:27:12.477 [2024-07-15 22:05:06.595605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.595617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.595824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.595855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.596037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.596068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.596359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.596390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.596613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.596643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.596867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.596898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.597125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.597138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.597251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.597263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.597461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.597493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.477 qpair failed and we were unable to recover it. 00:27:12.477 [2024-07-15 22:05:06.597779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.477 [2024-07-15 22:05:06.597809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 
00:27:12.478 [2024-07-15 22:05:06.598018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.598031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.598202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.598213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.598402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.598433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.598676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.598706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.598858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.598869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.599133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.599164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.599397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.599428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.599688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.599718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.599917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.599929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.600131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.600161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 
00:27:12.478 [2024-07-15 22:05:06.600330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.600362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.600497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.600526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.600810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.600840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.601145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.601176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.601420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.601451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.601607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.601637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.601859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.601890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.602120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.602132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.602422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.602434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.602617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.602648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 
00:27:12.478 [2024-07-15 22:05:06.602793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.602823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.603044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.603074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.603311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.603342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.603563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.603630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.603929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.603999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.604241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.478 [2024-07-15 22:05:06.604296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.478 qpair failed and we were unable to recover it. 00:27:12.478 [2024-07-15 22:05:06.604497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.604510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.604714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.604726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.604854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.604884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.605142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.605172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 
00:27:12.479 [2024-07-15 22:05:06.605346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.605377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.605604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.605635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.605857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.605888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.606053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.606090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.606202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.606213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.606384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.606396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.606516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.606528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.606740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.606771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.606951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.606982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.607248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.607279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 
00:27:12.479 [2024-07-15 22:05:06.607565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.607596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.607763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.607794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.607928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.607940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.608127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.608158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.608409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.608440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.608664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.608695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.608870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.608901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.609118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.609148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.609409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.609440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.609679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.609710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 
00:27:12.479 [2024-07-15 22:05:06.610002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.610033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.610262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.610293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.610549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.610579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.610748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.610778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.611000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.611031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.611201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.611213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.611458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.611470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.611645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.611658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.611850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.479 [2024-07-15 22:05:06.611861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.479 qpair failed and we were unable to recover it. 00:27:12.479 [2024-07-15 22:05:06.612039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.612051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 
00:27:12.480 [2024-07-15 22:05:06.612334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.612348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.612457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.612488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.612724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.612754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.613047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.613117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.613447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.613467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.613686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.613702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.613951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.613967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.614262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.614293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.614520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.614552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.614711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.614741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 
00:27:12.480 [2024-07-15 22:05:06.614904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.614943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.615073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.615087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.615310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.615342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.615602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.615634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.615859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.615890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.616123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.616154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.616337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.616378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.616631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.616662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.616897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.616929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.617089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.617120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 
00:27:12.480 [2024-07-15 22:05:06.617284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.617300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.617563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.617579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.617829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.617862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.618017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.618049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.618264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.618295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.618531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.618563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.618780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.618811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.618981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.619024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.619202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.619217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.619417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.619433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 
00:27:12.480 [2024-07-15 22:05:06.619624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.619656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.619874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.619905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.620132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.620164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.620381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.620413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.620660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.480 [2024-07-15 22:05:06.620692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.480 qpair failed and we were unable to recover it. 00:27:12.480 [2024-07-15 22:05:06.620855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.620887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.621183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.621233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.621428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.621444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.621627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.621642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.621866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.621898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 
00:27:12.481 [2024-07-15 22:05:06.622208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.622248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.622465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.622481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.622616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.622631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.622855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.622894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.623087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.623119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.623349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.623384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.623618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.623650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.623899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.623931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.624096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.624111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.624311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.624343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 
00:27:12.481 [2024-07-15 22:05:06.624586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.624617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.624840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.624871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.625049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.625079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.625317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.625349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.625582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.625614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.625898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.625929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.626172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.626202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.626393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.626424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.626640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.626671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.626902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.626933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 
00:27:12.481 [2024-07-15 22:05:06.627101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.627132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.627409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.627440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.627742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.627773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.628001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.628038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.628174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.628190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.628391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.628423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.628707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.628738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.628914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.628931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.629081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.629111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 00:27:12.481 [2024-07-15 22:05:06.629281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.481 [2024-07-15 22:05:06.629313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.481 qpair failed and we were unable to recover it. 
00:27:12.481 [2024-07-15 22:05:06.629545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.481 [2024-07-15 22:05:06.629581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.481 qpair failed and we were unable to recover it.
00:27:12.481 [2024-07-15 22:05:06.629886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.481 [2024-07-15 22:05:06.629916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.481 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.630086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.630125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.630323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.630339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.630426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.630441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.630621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.630635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.630750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.630765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.630950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.630981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.631136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.631167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.631394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.631427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.631655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.631686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.631866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.631897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.632206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.632248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.632538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.632569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.632844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.632875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.633147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.633177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.633465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.633481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.633689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.633704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.633992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.634022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.634253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.634284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.634515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.634546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.634768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.634799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.635124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.635155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.635402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.635434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.635594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.635625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.635912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.635942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.636179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.636194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.636393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.636410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.636601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.636632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.636915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.636947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.637252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.637283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.637591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.637622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.637797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.637828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.638047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.638078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.638292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.638308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.638583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.638614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.638898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.638928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.639165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.639180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.639407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.639424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.482 [2024-07-15 22:05:06.639570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.482 [2024-07-15 22:05:06.639585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.482 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.639728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.639742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.639937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.639953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.640137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.640153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.640368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.640393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.640593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.640625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.640856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.640887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.641179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.641210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.641439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.641470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.641729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.641760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.642053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.642083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.642331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.642347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.642538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.642553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.642828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.642859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.643083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.643114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.643340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.643356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.643566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.643581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.643736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.643766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.643996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.644027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.644256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.644288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.644575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.644606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.644828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.644860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.645143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.645158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.645337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.645353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.645644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.645675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.645959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.645989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.646166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.646197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.646427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.646443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.646670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.646701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.646955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.646986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.647235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.647251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.647451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.647466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.483 qpair failed and we were unable to recover it.
00:27:12.483 [2024-07-15 22:05:06.647659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.483 [2024-07-15 22:05:06.647675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.647954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.647969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.648194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.648235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.648415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.648446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.648747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.648777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.649000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.649031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.649196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.649235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.649522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.649553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.649843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.649873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.650100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.650130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.650429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.650445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.650580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.650595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.650790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.650805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.484 [2024-07-15 22:05:06.651109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.484 [2024-07-15 22:05:06.651140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.484 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.651435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.651469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.651652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.651684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.651888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.651918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.652097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.652128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.652370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.652386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.652597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.652612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.652861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.652876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.653126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.653156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.653378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.653410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.653648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.653678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.653834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.653870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.654174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.654205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.654363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.654378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.654559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.654574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.654778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.654808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.655028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.655059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.655315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.655348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.655698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.655729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.656015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.656046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.656207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.656254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.656485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.656517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.656705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.656736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.656968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.657000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.657328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.657360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.657646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.657681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.657887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.657917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.658176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.658191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.658340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.658356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.658616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.658647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.658816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.658846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.659176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.659207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.659348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.659379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.659628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.659659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.659894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.659924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.660169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.660199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.660358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.660374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.660526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.660556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.660844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.660880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.661033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.661065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.661218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.661238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.661373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.661389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.661523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.661538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.661743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.661773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.752 qpair failed and we were unable to recover it.
00:27:12.752 [2024-07-15 22:05:06.661994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.752 [2024-07-15 22:05:06.662025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.662143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.662173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.662401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.662417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.662554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.662584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.662867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.662898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.663083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.663114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.663298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.663330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.663525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.663556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.663868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.663899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.664130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.664161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.664383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.664415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.664670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.664701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.664872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.664902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.665105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.665121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.665249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.665281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.665447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.665477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.665643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.665673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.665905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.665936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.666235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.666250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.666331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.666346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.666550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.666565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.666841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.666859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.667070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.667085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.667312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.667327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.667524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.667540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.667639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.667653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.667847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.667878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.668181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.668212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.668538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.668569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.668801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.668833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.669065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.669096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.669337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.669368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.669545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.669576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.669760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.669790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.670035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.670066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.670346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.670381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.670504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.670531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.670723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.670736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.670913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.670925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.671164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.671194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.671479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.671548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.753 [2024-07-15 22:05:06.671849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.753 [2024-07-15 22:05:06.671882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.753 qpair failed and we were unable to recover it.
00:27:12.756 [2024-07-15 22:05:06.719500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.719532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.719744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.719760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.720009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.720040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.720204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.720244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.720506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.720538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.720717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.720733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.721033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.721048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.721203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.721222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.721348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.721363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.721566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.721597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 
00:27:12.756 [2024-07-15 22:05:06.721884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.721914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.722147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.722183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.722500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.722532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.722681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.722712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.722939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.722970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.723267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.723301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.723477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.723508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.723731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.723747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.723964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.723995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.724289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.724321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 
00:27:12.756 [2024-07-15 22:05:06.724536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.724566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.724783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.724815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.725073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.725104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.725335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.725351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.725482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.725497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.725710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.725740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.725905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.725937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.726108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.726139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.726306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.726338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.726561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.726593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 
00:27:12.756 [2024-07-15 22:05:06.726756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.726791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.727015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.727059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.727269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.727283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.727410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.727424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.727622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.727654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.727884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.727915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.728081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.728112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.728347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.728363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.728549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.728564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.728709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.728723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 
00:27:12.756 [2024-07-15 22:05:06.728925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.728940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.729092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.729122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.729279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.729310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.729543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.729574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.729812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.729827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.730077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.730092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.730239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.730256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.730459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.730475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.730603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.730619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.730738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.730754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 
00:27:12.756 [2024-07-15 22:05:06.730867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.730882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.756 [2024-07-15 22:05:06.731032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.756 [2024-07-15 22:05:06.731048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.756 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.731246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.731262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.731490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.731506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.731704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.731719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.731928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.731944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.732091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.732106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.732244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.732260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.732535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.732566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.732861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.732893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 
00:27:12.757 [2024-07-15 22:05:06.733182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.733214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.733407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.733422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.733675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.733706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.734016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.734047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.734351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.734383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.734628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.734659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.734969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.734999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.735221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.735278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.735561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.735577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.735745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.735775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 
00:27:12.757 [2024-07-15 22:05:06.735994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.736030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.736250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.736283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.736573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.736605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.736849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.736880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.737120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.737151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.737408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.737424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.737575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.737605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.737820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.737850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.738015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.738046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.738386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.738418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 
00:27:12.757 [2024-07-15 22:05:06.738587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.738618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.738860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.738891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.739174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.739204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.739435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.739468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.739726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.739741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.739934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.739950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.740234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.740266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.740522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.740553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.740769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.740800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.740989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.741020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 
00:27:12.757 [2024-07-15 22:05:06.741247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.741279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.741433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.741449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.741565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.741605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.741855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.741885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.742128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.742170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.742315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.742330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.742570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.742601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.742784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.742815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.743032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.743063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.743292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.743325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 
00:27:12.757 [2024-07-15 22:05:06.743545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.743561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.743779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.743810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.744079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.744109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.744393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.744439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.744669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.744685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.744872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.744888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.745087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.745119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.745445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.745486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.745667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.745682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.745869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.745901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 
00:27:12.757 [2024-07-15 22:05:06.746073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.746110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.746330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.746372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.746601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.746618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.746745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.746760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.747014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.747030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.747111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.747126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.747356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.747388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.747568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.747600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.747768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.747799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.748057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.748088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 
00:27:12.757 [2024-07-15 22:05:06.748356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.748387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.757 [2024-07-15 22:05:06.748627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.757 [2024-07-15 22:05:06.748643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.757 qpair failed and we were unable to recover it. 00:27:12.758 [2024-07-15 22:05:06.748839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.758 [2024-07-15 22:05:06.748855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.758 qpair failed and we were unable to recover it. 00:27:12.758 [2024-07-15 22:05:06.749120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.758 [2024-07-15 22:05:06.749151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.758 qpair failed and we were unable to recover it. 00:27:12.758 [2024-07-15 22:05:06.749434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.758 [2024-07-15 22:05:06.749450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.758 qpair failed and we were unable to recover it. 00:27:12.758 [2024-07-15 22:05:06.749647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.758 [2024-07-15 22:05:06.749662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.758 qpair failed and we were unable to recover it. 00:27:12.758 [2024-07-15 22:05:06.749794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.758 [2024-07-15 22:05:06.749809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.758 qpair failed and we were unable to recover it. 00:27:12.758 [2024-07-15 22:05:06.749901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.758 [2024-07-15 22:05:06.749916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.758 qpair failed and we were unable to recover it. 00:27:12.758 [2024-07-15 22:05:06.750123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.758 [2024-07-15 22:05:06.750139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.758 qpair failed and we were unable to recover it. 00:27:12.758 [2024-07-15 22:05:06.750411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.758 [2024-07-15 22:05:06.750428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.758 qpair failed and we were unable to recover it. 
00:27:12.758 [2024-07-15 22:05:06.750674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.758 [2024-07-15 22:05:06.750690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420
00:27:12.758 qpair failed and we were unable to recover it.
00:27:12.758 [... the same pair of errors — connect() failed, errno = 111 from posix.c:1023:posix_sock_create, then sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, each followed by "qpair failed and we were unable to recover it." — repeats ~200 more times between 22:05:06.750 and 22:05:06.801; timestamps omitted ...]
00:27:12.760 [2024-07-15 22:05:06.801712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.760 [2024-07-15 22:05:06.801750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.760 qpair failed and we were unable to recover it.
00:27:12.760 [2024-07-15 22:05:06.801999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.760 [2024-07-15 22:05:06.802035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.760 qpair failed and we were unable to recover it.
00:27:12.760 [2024-07-15 22:05:06.802300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.760 [2024-07-15 22:05:06.802317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:12.760 qpair failed and we were unable to recover it.
00:27:12.760 [2024-07-15 22:05:06.802461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.760 [2024-07-15 22:05:06.802476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.760 qpair failed and we were unable to recover it. 00:27:12.760 [2024-07-15 22:05:06.802680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.760 [2024-07-15 22:05:06.802711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.760 qpair failed and we were unable to recover it. 00:27:12.760 [2024-07-15 22:05:06.803040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.760 [2024-07-15 22:05:06.803071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.760 qpair failed and we were unable to recover it. 00:27:12.760 [2024-07-15 22:05:06.803238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.760 [2024-07-15 22:05:06.803271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.760 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.803597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.803637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.803836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.803851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.804047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.804061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.804263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.804278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.804411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.804426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.804701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.804731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 
00:27:12.761 [2024-07-15 22:05:06.804957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.804996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.805286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.805318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.805587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.805618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.805837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.805852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.806053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.806068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.806293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.806324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.806551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.806583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.806765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.806796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.807068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.807099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.807415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.807447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 
00:27:12.761 [2024-07-15 22:05:06.807666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.807698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.807930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.807961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.808094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.808125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.808350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.808365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.808571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.808603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.808887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.808919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.809151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.809182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.809430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.809462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.809754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.809769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.809964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.809979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 
00:27:12.761 [2024-07-15 22:05:06.810109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.810125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.810338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.810369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.810590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.810621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.810799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.810841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.811035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.811050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.811329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.811361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.811678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.811710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.811967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.811988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.812231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.812259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.812402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.812415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 
00:27:12.761 [2024-07-15 22:05:06.812555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.812586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.812821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.812852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.813094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.813126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.813359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.813391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.813632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.813663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.813896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.813907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.814093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.814105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.814250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.814283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.814543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.814574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.814831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.814862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 
00:27:12.761 [2024-07-15 22:05:06.815079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.815118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.815390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.815423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.815575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.815606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.815891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.815922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.816089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.816119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.816349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.816382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.816567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.816597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.816880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.816910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.817061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.817091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.817347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.817379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 
00:27:12.761 [2024-07-15 22:05:06.817551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.817582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.817883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.817925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.818098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.818129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.818351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.818382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.818633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.818644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.818777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.818789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.818919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.818950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.819257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.819289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.819547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.819578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.819862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.819893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 
00:27:12.761 [2024-07-15 22:05:06.820070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.820101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.820384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.820415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.820649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.820681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.820846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.761 [2024-07-15 22:05:06.820877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.761 qpair failed and we were unable to recover it. 00:27:12.761 [2024-07-15 22:05:06.821114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.821145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.821348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.821361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.821485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.821497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.821651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.821683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.821955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.821987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.822274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.822306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 
00:27:12.762 [2024-07-15 22:05:06.822542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.822573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.822804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.822835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.823091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.823122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.823353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.823385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.823697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.823728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.823898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.823930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.824082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.824113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.824300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.824331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.824616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.824656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.824899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.824910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 
00:27:12.762 [2024-07-15 22:05:06.825090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.825103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.825259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.825271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.825481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.825513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.825737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.825770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.825948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.825979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.826269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.826301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.826579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.826610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.826755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.826767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.826885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.826916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.827136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.827167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 
00:27:12.762 [2024-07-15 22:05:06.827338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.827370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.827550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.827581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.827805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.827836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.828128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.828160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.828313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.828345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.828649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.828680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.828893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.828905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.829207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.829251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.829563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.829600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.829786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.829798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 
00:27:12.762 [2024-07-15 22:05:06.829940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.829971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.830145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.830176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.830440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.830472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.830654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.830685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.830972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.831002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.831296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.831328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.831481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.831512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.831824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.831860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.832106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.832137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.832379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.832411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 
00:27:12.762 [2024-07-15 22:05:06.832661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.832692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.832898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.832909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.833102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.833133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.833294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.833326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.833571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.833602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.833837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.833868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.834090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.834121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.834342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.834374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.834598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.834630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-15 22:05:06.834855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.834886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 
00:27:12.762 [2024-07-15 22:05:06.835107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-15 22:05:06.835138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it.
00:27:12.765 (the same connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0x7f72c8000b90 from [2024-07-15 22:05:06.835382] through [2024-07-15 22:05:06.883194], every attempt failing with errno = 111 against addr=10.0.0.2, port=4420)
00:27:12.765 [2024-07-15 22:05:06.883409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.883480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it.
00:27:12.765 (the same sequence repeats for tqpair=0x7f72c0000b90 through [2024-07-15 22:05:06.884341], again with errno = 111 and no qpair recovered)
00:27:12.765 [2024-07-15 22:05:06.884572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.884604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.884832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.884848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.885000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.885016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.885233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.885265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.885511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.885543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.885774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.885807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.885995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.886026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.886297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.886330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.886555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.886587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.886875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.886906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 
00:27:12.765 [2024-07-15 22:05:06.887143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.887174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.887410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.887442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.887752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.887784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.887939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.887955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.888096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.888129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.888389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.888422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.888730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.888762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.888924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.888959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.889248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.889280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.889510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.889541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 
00:27:12.765 [2024-07-15 22:05:06.889770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.889802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.890056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.890088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.890334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.890366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.890606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.890636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.890862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.890893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.891177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.891209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.891458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.891490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.891686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.891718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.892027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.892058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.892222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.892264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 
00:27:12.765 [2024-07-15 22:05:06.892577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.892609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.892794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.892831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.893014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-15 22:05:06.893032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-15 22:05:06.893321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.893354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.893668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.893699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.893938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.893969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.894136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.894168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.894403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.894436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.894611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.894643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.894968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.894999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 
00:27:12.766 [2024-07-15 22:05:06.895327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.895359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.895539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.895571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.895795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.895827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.896184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.896215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.896486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.896519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.896753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.896784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.897080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.897112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.897331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.897364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.897541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.897572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.897738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.897770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 
00:27:12.766 [2024-07-15 22:05:06.897942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.897972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.898204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.898242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.898482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.898513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.898695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.898726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.898945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.898961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.899171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.899202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.899375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.899407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.899588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.899625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.899850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.899866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.900079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.900107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 
00:27:12.766 [2024-07-15 22:05:06.900403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.900445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.900735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.900767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.901022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.901053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.901279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.901311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.901534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.901565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.901852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.901883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.902191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.902203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.902520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.902532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.902667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.902679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.902831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.902842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 
00:27:12.766 [2024-07-15 22:05:06.903022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.903054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.903352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.903402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.903741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.903773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.904035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.904066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.904400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.904432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.904672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.904704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.905003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.905042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.905219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.905234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.905429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.905440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.905638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.905649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 
00:27:12.766 [2024-07-15 22:05:06.905880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.905911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.906089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.906121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.906378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.906410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.906650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.906682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.907006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.907038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.907348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.907381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.907562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.907594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.907795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.907827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.908051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.908082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.908302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.908333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 
00:27:12.766 [2024-07-15 22:05:06.908518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.908549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.908739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.908771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.908946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.908987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.909220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.909234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.909477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.909508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.909770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.909802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.910054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.910086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.910296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.910333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.910573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.910605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.910783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.910794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 
00:27:12.766 [2024-07-15 22:05:06.910932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.910944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.911119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.911130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.911361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.911373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.911504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.911516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-15 22:05:06.911753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-15 22:05:06.911764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.911882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.911894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.912144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.912174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.912520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.912552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.912838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.912870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.913200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.913239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-15 22:05:06.913415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.913446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.913729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.913741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.914040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.914071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.914313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.914346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.914661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.914693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.914930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.914962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.915211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.915249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.915482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.915514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.915763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.915794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.916080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.916092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-15 22:05:06.916384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.916396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.916541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.916553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.916747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.916760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.917083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.917115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.917442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.917514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.917880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.917963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.918203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.918221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.918423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.918439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.918591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.918607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.918754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.918786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-15 22:05:06.919047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.919079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.919322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.919356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.919610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.919642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.919822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.919838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.919981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.920013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.920273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.920306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.920595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.920627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.920887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.920927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.921167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.921199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.921524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.921556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-15 22:05:06.921736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.921767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.922059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.922091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.922405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.922438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.922666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.922680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.922967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.922983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.923271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.923305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.923538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.923569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.923807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.923848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.923974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.923989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.924245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.924277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-15 22:05:06.924472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.924504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.924814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.924846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.925018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.925050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.925319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.925351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.925598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.925631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.925870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.925901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.926199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.926215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.926366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.926382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.926675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.926706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.927009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.927050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-15 22:05:06.927345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.927363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.927577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.927609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.927870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.927901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.928124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.928156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.928465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.928514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.928826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.928859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.929101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.929132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.929436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.929469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.929689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.929720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.930007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.930039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-15 22:05:06.930296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.930312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.930604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.930636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.930880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.930912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.931149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.931181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.931374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.931406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.931649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.931680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.931949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.931980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.932295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.932327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.932580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.932612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-15 22:05:06.932845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-15 22:05:06.932877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-15 22:05:06.933100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.933132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.933384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.933416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.933616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.933647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.933876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.933907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.934212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.934232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.934408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.934442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.934621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.934653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.934928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.934960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.935195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.935236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.935462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.935493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-15 22:05:06.935684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.935715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.936022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.936058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.936359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.936391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.936637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.936669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.936846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.936878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.937101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.937133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.937341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.937374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.937689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.937721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.937986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.938017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.938309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.938342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-15 22:05:06.938514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.938545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.938836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.938868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.939091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.939106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.939424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.939456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.939631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.939663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.939862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.939894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.940220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.940272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.940515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.940547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.940736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.940770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.941010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.941026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-15 22:05:06.941246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.941262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.941405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.941438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.941622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.941654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.941887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.941919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.942212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.942259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.942501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.942533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.942831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.942862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.943210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.943252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.943506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.943543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.943719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.943750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-15 22:05:06.944026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.944042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.944325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.944341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.944553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.944584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.944739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.944771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.945023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.945053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.945351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.945383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.945627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.945660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.945923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.945940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.946063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.946079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.946240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.946273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-15 22:05:06.946513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.946544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.946827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.946860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.947086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.947118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.947462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.947495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.947675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.947706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.947982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.948014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.948318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.948334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.948472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.948504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.948740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.948757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.949016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.949048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-15 22:05:06.949274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.949308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.949484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.949515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.949758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.949790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.950082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.950114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.950357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.950390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.950629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.950665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.950935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.950967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.951201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.951216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.951386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.951402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.951622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.951654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-15 22:05:06.951965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-15 22:05:06.951997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-15 22:05:06.952317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.952364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.952591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.952636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.952786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.952802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.953123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.953139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.953329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.953347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.953505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.953521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.953682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.953714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.954051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.954083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.954376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.954410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 
00:27:12.769 [2024-07-15 22:05:06.954691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.954723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.955045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.955062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.955296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.955329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.955535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.955567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.955754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.955786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.956077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.956109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.956390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.956406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.956619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.956651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.956843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.956875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.957054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.957098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 
00:27:12.769 [2024-07-15 22:05:06.957361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.957378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.957593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.957610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.957798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.957814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.958047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.958063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.958255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.958272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.958526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.958559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.958791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.958822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.959045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.959077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.959260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.959277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.959481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.959514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 
00:27:12.769 [2024-07-15 22:05:06.959831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.959863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.960181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.960197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.960388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.960404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.960615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.960631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.960859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.960876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.961097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.961129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.961359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.961395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.961591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.961622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.961932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.961963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.962205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.962246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 
00:27:12.769 [2024-07-15 22:05:06.962418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.962450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.962684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.962716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.963000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.963040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.963305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.963321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.963535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.963552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.963690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.963706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.963867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.963899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.964155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.964187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.964448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.964481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-15 22:05:06.964804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-15 22:05:06.964837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 
00:27:12.769 [2024-07-15 22:05:06.965154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.965186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.965398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.965432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.965659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.965691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.965865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.965897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.966134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.966166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.966366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.966399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.966696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.966729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.966979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.967011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.967286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.967319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.967576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.967608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.967778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.967795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.968027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.968059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.968298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.968331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.968493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.968531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.968824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.968856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.969167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.969198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.969399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.969432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.969633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.969664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.969905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.969938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.970239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.970256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.970414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.970430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.970636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.970652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.970866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.970898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.971237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.971270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.971510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.971541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.971790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.971822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.972193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.972249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.972497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.769 [2024-07-15 22:05:06.972529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.769 qpair failed and we were unable to recover it.
00:27:12.769 [2024-07-15 22:05:06.972700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.972730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.973049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.973079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.973378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.973410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.973730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.973763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.974024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.974070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.974392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.974409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.974621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.974637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.974827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.974843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.975136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.975168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.975413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.975446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.975706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.975738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.975996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.976014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.976202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.976221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.976491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.976507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.976646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.976662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.976820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.976836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.977187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.977218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.977475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.977507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.977691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.977724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.978062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.978094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.978328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.978361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.978608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.978641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.978881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.978913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.979156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.979196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.979402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.979420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.979679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.979696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.980927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.980959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.981203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.981220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.981397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.981414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.981646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.981662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.981817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.981832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.981995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.982010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.982270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.982288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.982543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.982558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.982803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.982819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.983029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.983046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.983244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.983261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:12.770 [2024-07-15 22:05:06.983413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.770 [2024-07-15 22:05:06.983430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:12.770 qpair failed and we were unable to recover it.
00:27:13.039 [2024-07-15 22:05:06.983576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.039 [2024-07-15 22:05:06.983609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.039 qpair failed and we were unable to recover it.
00:27:13.039 [2024-07-15 22:05:06.983839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.039 [2024-07-15 22:05:06.983872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.039 qpair failed and we were unable to recover it.
00:27:13.039 [2024-07-15 22:05:06.984058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.039 [2024-07-15 22:05:06.984090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.039 qpair failed and we were unable to recover it.
00:27:13.039 [2024-07-15 22:05:06.984359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.039 [2024-07-15 22:05:06.984391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.039 qpair failed and we were unable to recover it.
00:27:13.039 [2024-07-15 22:05:06.984637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.039 [2024-07-15 22:05:06.984669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.039 qpair failed and we were unable to recover it.
00:27:13.039 [2024-07-15 22:05:06.984842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.039 [2024-07-15 22:05:06.984861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.039 qpair failed and we were unable to recover it.
00:27:13.039 [2024-07-15 22:05:06.985011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.985042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.985234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.985266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.985504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.985535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.985717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.985749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.985993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.986024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.986210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.986263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.986570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.986602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.986962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.986993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.987246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.987278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.987647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.987719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.987985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.988022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.988353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.988426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.988707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.988742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.989018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.989049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.989337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.989369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.989662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.989693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.989940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.989971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.990212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.990253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.990530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.990562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.990814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.990845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.991091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.991122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.991438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.991454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.991678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.991724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.992011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.992044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.992347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.992380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.992629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.992661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.992981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.993012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.993257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.993290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.993535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.993566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.993743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.993775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.994015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.994032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.994346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.994379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.994679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.994711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.995061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.995093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.995399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.995433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.995625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.995656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.995892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.995941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.996141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.996159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.996359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.996375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.996575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.996591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.996757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.996773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.997045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.997061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.997347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.997364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.997552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.997593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.997905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.997937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.998277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.998310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.998651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.998683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.999002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.999035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.999353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.999385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.999563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.999596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:06.999770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:06.999803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.000147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.000178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.000439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.000456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.000668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.000683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.000816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.000832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.001046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.001078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.001313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.001346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.001648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.001681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.002003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.002034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.002327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.002359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.002659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.002691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.002935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.002967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.003206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.003259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.003415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.003433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.003646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.003664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.003880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.003912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.004083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.004115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.004289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.004321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.004567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.004598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.004841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.004874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.005081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.005113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.005290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.005307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.005521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.005537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.040 [2024-07-15 22:05:07.005801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.040 [2024-07-15 22:05:07.005833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.040 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.006124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.006156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.006410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.006427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.006600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.006631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.006955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.006987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.007234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.007267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.007530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.007561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.007753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.007785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.008156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.008188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.008431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.008463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.008640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.008671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.008988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.009004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.009196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.009212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.009408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.009425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.009576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.009607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.009768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.009799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.010036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.010067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.010359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.010397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.010730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.010762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.011005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.011036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.011296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.011329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.011575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.011607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.011793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.011826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.012119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.012151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.012451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.012468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.012681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.012697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.012847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.012863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.013079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.013095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.013427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.013460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.013663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.013696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.014037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.014069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.014243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.014276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.014475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.014507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.014755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.014786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.015031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.015062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.015394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.015410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.015680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.015696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.015934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.015971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.016294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.016327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.016644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.016676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.016996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.017028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.017304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.017338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.017635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.017668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.017866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.017898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.018212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.018269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.018462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.018495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.018744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.018775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.018960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.018991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.019295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.019329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.019588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.019621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.019897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.019929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.020247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.020280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.020464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.020496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.020687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.020719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.021054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.021085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.021309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.021327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.021568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.021600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.021853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.021884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.022107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.022149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.022366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.022385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.022528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.022561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.022749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.022780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.023053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.023085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.023351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.023385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.023624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.041 [2024-07-15 22:05:07.023657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.041 qpair failed and we were unable to recover it.
00:27:13.041 [2024-07-15 22:05:07.023848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.023880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.024198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.024238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.024431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.024463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.024715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.024746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.025081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.025113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.025272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.025305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.025553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.025593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.025894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.025926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.026183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.026213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.026413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.026428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 
00:27:13.041 [2024-07-15 22:05:07.026575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.026607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.026860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.026892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.041 [2024-07-15 22:05:07.027191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.041 [2024-07-15 22:05:07.027223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.041 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.027437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.027469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.027698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.027729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.028058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.028090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.028380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.028413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.028664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.028696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.029003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.029035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.029211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.029233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 
00:27:13.042 [2024-07-15 22:05:07.029459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.029491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.029736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.029769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.030045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.030076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.030384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.030416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.030679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.030711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.030892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.030924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.031218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.031260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.031484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.031500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.031782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.031813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.032147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.032179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 
00:27:13.042 [2024-07-15 22:05:07.032424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.032456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.032646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.032678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.032915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.032948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.033219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.033262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.033558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.033589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.033888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.033920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.034167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.034199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.034397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.034429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.034612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.034644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.034993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.035026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 
00:27:13.042 [2024-07-15 22:05:07.035291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.035323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.035580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.035612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.035964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.035997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.036323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.036355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.036535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.036567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.036744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.036776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.037095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.037112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.037343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.037359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.037529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.037545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.037820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.037852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 
00:27:13.042 [2024-07-15 22:05:07.038083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.038115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.038347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.038380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.038624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.038640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.038832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.038849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.039142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.039159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.039409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.039426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.039595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.039612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.039876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.039908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.040182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.040198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.040370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.040386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 
00:27:13.042 [2024-07-15 22:05:07.040630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.040662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.041006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.041038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.041341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.041358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.041516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.041533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.041681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.041699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.041893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.041925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.042255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.042287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.042531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.042563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.042830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.042873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.043015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.043031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 
00:27:13.042 [2024-07-15 22:05:07.043343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.043360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.043511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.043527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.043845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.043877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.044065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.044102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.044360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.044392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.044740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.044772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.044955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.044987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.045242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.045274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.045477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.045510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.042 [2024-07-15 22:05:07.045710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.045742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 
00:27:13.042 [2024-07-15 22:05:07.045987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.042 [2024-07-15 22:05:07.046019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.042 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.046276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.046293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.046452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.046484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.046737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.046770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.047087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.047118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.047440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.047472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.047678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.047710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.047900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.047933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.048123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.048140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.048356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.048388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 
00:27:13.043 [2024-07-15 22:05:07.048638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.048671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.048912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.048944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.049220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.049260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.049508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.049524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.049688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.049705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.049999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.050032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.050289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.050322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.050672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.050704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.050901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.050933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.051197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.051236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 
00:27:13.043 [2024-07-15 22:05:07.051427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.051459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.051710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.051743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.052082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.052114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.052385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.052402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.052585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.052618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.052893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.052926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.053165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.053196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.053421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.053453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.053692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.053724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.053977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.054010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 
00:27:13.043 [2024-07-15 22:05:07.054261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.054279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.054477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.054494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.054781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.054814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.055118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.055173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.055508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.055541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.055847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.055879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.056192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.056235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.056505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.056538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.056790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.056822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.057135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.057167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 
00:27:13.043 [2024-07-15 22:05:07.057357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.057389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.057747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.057779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.058097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.058114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.058422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.058454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.058712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.058744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.059095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.059127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.059382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.059416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.059631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.059664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.059989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.060021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 00:27:13.043 [2024-07-15 22:05:07.060212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.043 [2024-07-15 22:05:07.060232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.043 qpair failed and we were unable to recover it. 
00:27:13.043 [2024-07-15 22:05:07.060444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.043 [2024-07-15 22:05:07.060477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.043 qpair failed and we were unable to recover it.
[... the same three-message group (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, differing only in timestamps, from 22:05:07.060 through 22:05:07.122; duplicate entries elided ...]
00:27:13.046 [2024-07-15 22:05:07.122416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.046 [2024-07-15 22:05:07.122496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420
00:27:13.046 qpair failed and we were unable to recover it.
00:27:13.047 [the same connect() failed (errno = 111) / qpair failed sequence repeats 79 more times for tqpair=0x7f72c0000b90, ending at 2024-07-15 22:05:07.143872]
00:27:13.047 [2024-07-15 22:05:07.144130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.047 [2024-07-15 22:05:07.144170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.047 qpair failed and we were unable to recover it.
00:27:13.048 [the same connect() failed (errno = 111) / qpair failed sequence repeats 89 more times for tqpair=0x7f72c8000b90, ending at 2024-07-15 22:05:07.163670]
00:27:13.048 [2024-07-15 22:05:07.163954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-15 22:05:07.163968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-15 22:05:07.164231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-15 22:05:07.164246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-15 22:05:07.164384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-15 22:05:07.164397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-15 22:05:07.164654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.164669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.164945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.164958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.165105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.165117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.165399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.165414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.165573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.165587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.165788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.165801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.166086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.166099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-15 22:05:07.166238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.166250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.166411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.166424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.166573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.166586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.166727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.166740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.167000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.167014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.167270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.167284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.167471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.167484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.167768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.167781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.167912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.167926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.168201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.168213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-15 22:05:07.168457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.168474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.168660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.168673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.168859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.168873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.169097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.169110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.169311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.169324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.169575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.169587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.169862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.169875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.170059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.170072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.170343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.170356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.170654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.170667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-15 22:05:07.170940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.170954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.171189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.171201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.171437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.171450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.171684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.171698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.171969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.171981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.172120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.172132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.172369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.172383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.172607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.172620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.172874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.172888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.173107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.173120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-15 22:05:07.173327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.173340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.173535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.173548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.173701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.173714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.174013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.174026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.174258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.174271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.174431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.174443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.174583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.174596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.174830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.174843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.175111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.175125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.175399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.175412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-15 22:05:07.175637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.175650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.175798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.175810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.176101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.176114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.176312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.176325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.176521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.176534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.176738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.176752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.177044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.177056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.177253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.177266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.177515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.177528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.177680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.177693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-15 22:05:07.178001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.178014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.178206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.178219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.178421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.178434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.178594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.178607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.178806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.178819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.179121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.179133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.179408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.179420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.179544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.179557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.179747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.179760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.179894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.179906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-15 22:05:07.180160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.180175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.180395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.180407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-15 22:05:07.180542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-15 22:05:07.180555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.180749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.180762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.181040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.181054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.181332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.181346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.181648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.181662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.181790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.181803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.182039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.182052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.182364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.182376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 
00:27:13.050 [2024-07-15 22:05:07.182576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.182589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.182876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.182889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.183077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.183091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.183350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.183363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.183567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.183601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.183851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.183883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.184048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.184083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.184328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.184360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.184620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.184652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.184949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.184980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 
00:27:13.050 [2024-07-15 22:05:07.185278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.185310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.185496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.185528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.185720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.185752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.186015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.186047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.186274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.186308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.186482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.186498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.186690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.186722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.186913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.186946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.187327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.187360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.187557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.187592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 
00:27:13.050 [2024-07-15 22:05:07.187836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.187868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.188166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.188198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.188408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.188441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.188689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.188721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.188899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.188931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.190049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.190074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.190387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.190401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.190556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.190570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.190824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.190857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.191110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.191142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 
00:27:13.050 [2024-07-15 22:05:07.191398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.191439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.191620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.191655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.191956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.191971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.192220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.192250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.192526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.192558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.192751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.192783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.193088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.193120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.193314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.193346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.193644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.193673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.193799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.193811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 
00:27:13.050 [2024-07-15 22:05:07.194040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.194053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.194260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.194273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.194489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.194502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.194649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.194661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.194795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.194807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.194948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.194960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.195149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.195161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.195297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.195311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.195518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.195531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-15 22:05:07.195684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-15 22:05:07.195716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 
00:27:13.050 [2024-07-15 22:05:07.195997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.050 [2024-07-15 22:05:07.196029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.050 qpair failed and we were unable to recover it.
[the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, roughly 200 more times, from 22:05:07.196 through 22:05:07.250; every attempt reports tqpair=0x7f72c8000b90 except the final three, which report tqpair=0x1e7ffc0]
00:27:13.053 [2024-07-15 22:05:07.251197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.053 [2024-07-15 22:05:07.251240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.053 qpair failed and we were unable to recover it.
00:27:13.053 [2024-07-15 22:05:07.251536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.251569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.251769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.251800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.252107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.252139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.252372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.252412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.252611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.252627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.252839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.252855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.253147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.253179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.253441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.253476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.253717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.253750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.254007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.254039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 
00:27:13.053 [2024-07-15 22:05:07.254378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.254412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.254683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.254716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.254899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.254916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.255166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.255198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.255507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.255540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.255869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.255902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.256148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.256179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.256355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.256388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.256686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.256728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.256971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.257002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 
00:27:13.053 [2024-07-15 22:05:07.257361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.257393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.257653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.257684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.257950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.257966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.258237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.258254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.258414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.258431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.258606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.258639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.258883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.258915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.259144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.259176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.259510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.259543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.259729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.259762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 
00:27:13.053 [2024-07-15 22:05:07.260052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.260085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-15 22:05:07.260418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-15 22:05:07.260450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.260720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.260753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.261116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.261149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.261380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.261412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.261592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.261625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.261811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.261844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.262165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.262196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.262515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.262548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.262845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.262878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 
00:27:13.054 [2024-07-15 22:05:07.263193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.263233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.263463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.263494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.263783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.263815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.264134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.264166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.264496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.264529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.264819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.264864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.265011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.265028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.265262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.265295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.265684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.265724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.265883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.265899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 
00:27:13.054 [2024-07-15 22:05:07.266116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.266131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.266409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.266431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.266568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.266584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.266783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.266799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.267018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.267050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.267434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.267467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.267717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.267733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.267939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.267955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.268246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.268280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.268546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.268577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 
00:27:13.054 [2024-07-15 22:05:07.268735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.268751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.268966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.268983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-15 22:05:07.269184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-15 22:05:07.269216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.269552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.269585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.269763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.269796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.270156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.270188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.270485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.270519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.270716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.270748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.270998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.271030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.271271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.271305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 
00:27:13.332 [2024-07-15 22:05:07.271498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.271531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.271806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.271838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.272024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.272043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.272243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.272285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.272540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.272572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.272821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.272837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.273115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.273146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.332 [2024-07-15 22:05:07.273379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.332 [2024-07-15 22:05:07.273412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.332 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.273658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.273696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.273872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.273905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 
00:27:13.333 [2024-07-15 22:05:07.274093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.274125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.274351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.274384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.274625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.274641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.274921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.274954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.275284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.275317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.275497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.275530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.275816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.275847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.276142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.276174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.276489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.276522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.276780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.276812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 
00:27:13.333 [2024-07-15 22:05:07.277105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.277139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.277422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.277454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.278532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.278562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.278737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.278753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.279038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.279069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.279418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.279458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.279715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.279731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.280026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.280042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.280303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.280319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.280531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.280546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 
00:27:13.333 [2024-07-15 22:05:07.280775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.280807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.281056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.281088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.281396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.281447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.281649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.281666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.281789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.281806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.282009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.282025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.282175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.282192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.282390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.282407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.282612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.282628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 00:27:13.333 [2024-07-15 22:05:07.282827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.333 [2024-07-15 22:05:07.282860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.333 qpair failed and we were unable to recover it. 
00:27:13.334 [2024-07-15 22:05:07.283156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.283188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.283371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.283405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.283626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.283658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.283955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.283986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.284305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.284338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.284611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.284643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.284917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.284949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.285276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.285309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.285491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.285523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.285769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.285802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 
00:27:13.334 [2024-07-15 22:05:07.286115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.286131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.286356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.286389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.286641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.286673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.286986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.287018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.287330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.287362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.287601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.287632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.287886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.287917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.288148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.288180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.288372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.288405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.288699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.288715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 
00:27:13.334 [2024-07-15 22:05:07.288913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.288928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.289137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.289153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.289417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.289450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.289628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.289660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.290003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.290035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.290265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.290298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.290500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.290532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.290828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.290860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.291173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.291205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.291388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.291420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 
00:27:13.334 [2024-07-15 22:05:07.291607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.291623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.291908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.291939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.292260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.334 [2024-07-15 22:05:07.292293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.334 qpair failed and we were unable to recover it. 00:27:13.334 [2024-07-15 22:05:07.292626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.335 [2024-07-15 22:05:07.292658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.335 qpair failed and we were unable to recover it. 00:27:13.335 [2024-07-15 22:05:07.292889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.335 [2024-07-15 22:05:07.292921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.335 qpair failed and we were unable to recover it. 00:27:13.335 [2024-07-15 22:05:07.293185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.335 [2024-07-15 22:05:07.293216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.335 qpair failed and we were unable to recover it. 00:27:13.335 [2024-07-15 22:05:07.293485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.335 [2024-07-15 22:05:07.293523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.335 qpair failed and we were unable to recover it. 00:27:13.335 [2024-07-15 22:05:07.293803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.335 [2024-07-15 22:05:07.293820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.335 qpair failed and we were unable to recover it. 00:27:13.335 [2024-07-15 22:05:07.294148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.335 [2024-07-15 22:05:07.294180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.335 qpair failed and we were unable to recover it. 00:27:13.335 [2024-07-15 22:05:07.294444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.335 [2024-07-15 22:05:07.294478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.335 qpair failed and we were unable to recover it. 
00:27:13.335 [2024-07-15 22:05:07.294746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.294777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.295099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.295131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.295469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.295502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.295795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.295826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.296052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.296083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.296319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.296352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.296675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.296707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.296884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.296916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.297165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.297197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.297455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.297488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.297787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.297819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.298091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.298108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.298404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.298436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.298692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.298725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.298958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.298991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.299314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.299347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.299543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.299575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.299815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.299848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.300021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.300053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.300278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.300310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.300647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.300679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.300859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.300892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.301064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.301079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.301343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.301381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.301629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.301660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.301994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.335 [2024-07-15 22:05:07.302026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.335 qpair failed and we were unable to recover it.
00:27:13.335 [2024-07-15 22:05:07.302324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.302357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.302554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.302597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.302747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.302763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.303039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.303055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.303339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.303356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.303508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.303543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.303857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.303888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.304204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.304242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.304541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.304573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.304809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.304841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.305217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.305256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.305602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.305634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.305898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.305930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.306182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.306213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.306469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.306500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.306757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.306773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.307083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.307115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.307355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.307387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.307707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.307740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.307984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.308016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.308338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.308371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.308615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.308648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.308942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.308974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.309291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.309324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.309598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.309634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.309807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.309839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.310195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.310252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.310485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.310517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.310873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.310906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.311142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.311174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.311383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.336 [2024-07-15 22:05:07.311416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.336 qpair failed and we were unable to recover it.
00:27:13.336 [2024-07-15 22:05:07.311710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.311741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.312008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.312040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.312313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.312346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.312645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.312676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.312976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.313008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.313328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.313361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.313678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.313710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.314033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.314066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.314297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.314330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.314663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.314695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.314965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.314981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.315283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.315300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.315463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.315480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.315692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.315723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.316047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.316079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.316323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.316357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.316604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.316637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.316983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.317015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.317344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.317376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.317647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.317679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.317906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.317923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.318114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.318130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.318394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.318431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.318604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.318636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.318961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.318992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.319235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.319252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.319559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.337 [2024-07-15 22:05:07.319575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.337 qpair failed and we were unable to recover it.
00:27:13.337 [2024-07-15 22:05:07.319809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.319826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.320089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.320136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.320372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.320404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.320651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.320683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.320983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.321016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.321328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.321362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.321607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.321639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.321942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.321974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.322209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.322251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.322484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.322516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.322853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.322885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.323146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.323178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.323427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.323460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.323707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.323739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.324032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.324049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.324269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.324286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.324446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.324463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.324724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.324740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.324954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.324971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.325235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.325273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.325572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.325605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.325869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.325902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.326267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.326300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.326626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.326658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.327008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.327039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.338 [2024-07-15 22:05:07.327360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.338 [2024-07-15 22:05:07.327392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.338 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.327689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.327722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.328040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.328072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.328319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.328352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.328577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.328610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.328840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.328872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.329190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.329222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.329472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.329505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.329738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.329769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.330035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.330072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.330322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.330356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.330656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.330689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.330956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.330988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.331248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.331281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.331599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.331632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.331933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.331965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.332281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.332315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.332621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.332664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.332883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.332900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.333205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.333246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.333479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.333513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.333745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.333777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.334007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.334038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.334357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.334390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.334734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.334766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.334992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.335009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.335279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.335313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.335633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.335666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.335968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.336001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.336316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.336349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.336640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.336672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.336919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.336936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.339 [2024-07-15 22:05:07.337131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.339 [2024-07-15 22:05:07.337148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.339 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.337365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.337399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.337718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.337750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.337986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.338018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.338269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.338291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.338587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.338618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.338958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.338991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.339305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.339323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.339636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.339653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.339927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.339960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.340295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.340329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.340601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.340634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.340873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.340906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.341246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.341284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.341616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.341649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.341954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.341970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.342193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.342210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.342419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.342436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.342666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.342683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.342983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.343014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.343360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.343395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.343720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.343753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.344088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.344121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.344419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.344452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.344718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.344763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.344976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.344994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.345284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.345302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.345592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.345624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.345959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.340 [2024-07-15 22:05:07.345991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.340 qpair failed and we were unable to recover it.
00:27:13.340 [2024-07-15 22:05:07.346310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-15 22:05:07.346327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.340 qpair failed and we were unable to recover it. 00:27:13.340 [2024-07-15 22:05:07.346615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-15 22:05:07.346633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.340 qpair failed and we were unable to recover it. 00:27:13.340 [2024-07-15 22:05:07.346923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-15 22:05:07.346955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.340 qpair failed and we were unable to recover it. 00:27:13.340 [2024-07-15 22:05:07.347262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-15 22:05:07.347296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.340 qpair failed and we were unable to recover it. 00:27:13.340 [2024-07-15 22:05:07.347604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-15 22:05:07.347637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.340 qpair failed and we were unable to recover it. 00:27:13.340 [2024-07-15 22:05:07.347944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.347976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.348254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.348287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.348616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.348649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.348978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.349010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.349329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.349362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 
00:27:13.341 [2024-07-15 22:05:07.349530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.349563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.349790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.349833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.350035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.350052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.350274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.350292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.350601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.350633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.350908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.350940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.351199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.351245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.351582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.351615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.351934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.351967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.352271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.352304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 
00:27:13.341 [2024-07-15 22:05:07.352539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.352571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.352808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.352841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.353160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.353176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.353400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.353417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.353637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.353671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.353934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.353965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.354260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.354278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.354512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.354529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.354743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.354760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.355078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.355095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 
00:27:13.341 [2024-07-15 22:05:07.355363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.355396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.355750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.355782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.356083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.356115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.356364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.356398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.356730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.356762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.357079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.357113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.357439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.357472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-15 22:05:07.357656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-15 22:05:07.357689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.357915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.357931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.358257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.358291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 
00:27:13.342 [2024-07-15 22:05:07.358542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.358575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.358803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.358836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.359114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.359150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.359421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.359460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.359789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.359822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.360124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.360156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.360390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.360442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.360805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.360838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.361187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.361220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.361558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.361591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 
00:27:13.342 [2024-07-15 22:05:07.361827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.361844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.362070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.362087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.362246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.362265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.362474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.362491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.362691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.362724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.363006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.363038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.363275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.363307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.363561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.363593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.363757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.363789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.364018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.364051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 
00:27:13.342 [2024-07-15 22:05:07.364376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.364409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.364749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.364781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.365110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.365143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.365472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.365504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.365829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.365863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.366189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.366222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.366562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.366595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.366862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.366895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.367057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.367098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-15 22:05:07.367391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-15 22:05:07.367424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 
00:27:13.343 [2024-07-15 22:05:07.367740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.367777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.368100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.368117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.368312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.368339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.368634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.368667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.368826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.368859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.369208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.369249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.369576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.369609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.369842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.369874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.370051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.370083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.370403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.370436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 
00:27:13.343 [2024-07-15 22:05:07.370681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.370714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.371044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.371076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.371396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.371429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.371661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.371693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.371893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.371926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.372199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.372215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.372547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.372579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.372815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.372848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.373172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.373204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.373473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.373506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 
00:27:13.343 [2024-07-15 22:05:07.373855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.373888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.374210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.374254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.374540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.374572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-15 22:05:07.374847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-15 22:05:07.374880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.375236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.375270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.375532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.375564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.375866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.375899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.376246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.376285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.376519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.376552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.376855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.376887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 
00:27:13.344 [2024-07-15 22:05:07.377213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.377257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.377562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.377593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.377859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.377891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.378205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.378253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.378585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.378618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.378870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.378902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.379145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.379176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.379485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.379518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.379851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.379884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.380137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.380168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 
00:27:13.344 [2024-07-15 22:05:07.380505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.380539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.380868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.380902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.381153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.381185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.381504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.381537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.381864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.381896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.382131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.382147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.382441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.382477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.382799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.382831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.383154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.383186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.383501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.383534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 
00:27:13.344 [2024-07-15 22:05:07.383857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.383890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.384127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.384159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.384492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.384526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.384828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.384860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.385181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.385214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.385519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.385551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.385804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.385837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.386097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-15 22:05:07.386129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-15 22:05:07.386363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.386397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.386729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.386762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 
00:27:13.345 [2024-07-15 22:05:07.387083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.387116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.387421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.387454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.387796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.387829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.388149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.388182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.388527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.388560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.388830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.388863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.389187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.389220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.389534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.389567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.389876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.389909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.390264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.390297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 
00:27:13.345 [2024-07-15 22:05:07.390602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.390634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.390956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.391001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.391305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.391338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.391573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.391605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.391909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.391941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.392266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.392299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.392541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.392573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.392897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.392930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.393243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.393276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.393456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.393488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 
00:27:13.345 [2024-07-15 22:05:07.393812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.393845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.394122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.394155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.394500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.394533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.394869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.394901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.395131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.395164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.395449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.395482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.395648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.395681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.396015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.396048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.396313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.396347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-15 22:05:07.396699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-15 22:05:07.396731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 
00:27:13.345 [2024-07-15 22:05:07.397096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.397128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.397455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.397488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.397787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.397820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.398141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.398173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.398497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.398530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.398855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.398893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.399173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.399205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.399458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.399491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.399722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.399753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.400057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.400089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 
00:27:13.346 [2024-07-15 22:05:07.400409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.400426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.400697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.400728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.401068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.401101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.401376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.401410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.401660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.401693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.402035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.402071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.402337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.402370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.402671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.402704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.403050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.403082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.403340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.403374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 
00:27:13.346 [2024-07-15 22:05:07.403727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.403760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.404069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.404101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.404332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.404367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.404694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.404726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.404958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.404989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.405317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.405350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.405683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.405716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.406046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.406079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.406379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.406412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.406732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.406765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 
00:27:13.346 [2024-07-15 22:05:07.407094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.407127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.407356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.407373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.407602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-15 22:05:07.407640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-15 22:05:07.407968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.408001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.408232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.408249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.408448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.408481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.408829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.408860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.409110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.409127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.409435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.409469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.409702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.409734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 
00:27:13.347 [2024-07-15 22:05:07.410046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.410078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.410382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.410414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.410731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.410763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.410995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.411027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.411274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.411293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.411582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.411599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.411827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.411861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.412115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.412147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.412473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.412507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.412808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.412840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 
00:27:13.347 [2024-07-15 22:05:07.413110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.413142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.413388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.413423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.413653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.413686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.413990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.414023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.414357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.414391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.414716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.414749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.415050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.415082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.415408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.415442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.415768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.415800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.416129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.416161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 
00:27:13.347 [2024-07-15 22:05:07.416476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.416510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.416756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.416788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.417117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.417149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.417475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.417508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.417748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.417780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.418113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.418145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.418450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.418484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-15 22:05:07.418815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-15 22:05:07.418848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.419100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.419132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.419367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.419400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 
00:27:13.348 [2024-07-15 22:05:07.419703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.419735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.420091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.420123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.420460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.420493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.420772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.420806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.421064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.421096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.421448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.421481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.421812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.421845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.422148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.422182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.422426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.422459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.422731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.422773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 
00:27:13.348 [2024-07-15 22:05:07.423089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.423120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.423447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.423480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.423810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.423843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.424073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.424105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.424433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.424450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.424750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.424783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.425035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.425067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.425371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.425405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.425711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.425744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.426053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.426100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 
00:27:13.348 [2024-07-15 22:05:07.426384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.426418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.426774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.426806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.427040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.427073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.427305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.427338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.427653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.427686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.428030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-15 22:05:07.428063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-15 22:05:07.428388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.428422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.428675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.428708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.429057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.429090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.429406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.429423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 
00:27:13.349 [2024-07-15 22:05:07.429698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.429736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.429989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.430022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.430270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.430287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.430499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.430516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.430788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.430821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.431121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.431138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.431407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.431425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.431641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.431659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.431957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.431990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.432244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.432277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 
00:27:13.349 [2024-07-15 22:05:07.432578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.432610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.432883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.432915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.433092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.433125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.433459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.433494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.433824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.433856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.434178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.434195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.434531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.434565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.434797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.434829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.435160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.435193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.435511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.435544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 
00:27:13.349 [2024-07-15 22:05:07.435843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.435875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.436198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.436241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.436540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.436572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.436812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.436845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.437119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.437151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.437484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.437517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.437846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-15 22:05:07.437879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-15 22:05:07.438113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.438152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.438477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.438510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.438690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.438722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 
00:27:13.350 [2024-07-15 22:05:07.439042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.439060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.439369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.439386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.439628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.439661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.439973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.440006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.440259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.440292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.440643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.440676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.440931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.440964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.441317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.441356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.441695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.441728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.441970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.441987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 
00:27:13.350 [2024-07-15 22:05:07.442287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.442321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.442681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.442714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.442962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.442994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.443319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.443352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.443679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.443712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.444041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.444075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.444396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.444430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.444740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.444773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.445049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.445081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.445343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.445376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 
00:27:13.350 [2024-07-15 22:05:07.445732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.445765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.446092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.446134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.446410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.446428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.446711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.446743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.447088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.447126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.447424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.447442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.447677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.447710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.447891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.447923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-15 22:05:07.448254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-15 22:05:07.448302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-15 22:05:07.448501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-15 22:05:07.448519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 
00:27:13.351 [2024-07-15 22:05:07.448744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.448762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.449040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.449058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.449338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.449372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.449626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.449659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.450009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.450042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.450331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.450349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.450646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.450678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.450911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.450943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.451340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.451423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.451781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.451817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.452128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.452162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.452479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.452497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.452793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.452810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.453019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.453036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.453271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.453288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.453566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.453583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.453851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.453898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.454233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.454267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.454500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.454532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.454903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.454935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.455261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.455295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.455595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.455637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.455945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.455977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.456283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.456317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.456660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.456692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.457016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.457057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.457342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.457359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.457611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.351 [2024-07-15 22:05:07.457628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.351 qpair failed and we were unable to recover it.
00:27:13.351 [2024-07-15 22:05:07.457907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.457940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.458291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.458324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.458587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.458604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.458822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.458838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.459053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.459070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.459389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.459422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.459784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.459816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.460121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.460154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.460469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.460502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.460808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.460840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.461160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.461192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.461501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.461536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.461845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.461878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.462217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.462256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.462582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.462615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.462893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.462925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.463118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.463150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.463477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.463510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.463820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.463853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.464155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.464172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.464500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.464538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.464791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.464824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.465060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.465092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.465424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.465458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.465662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.465695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.465943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.465985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.466252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.466269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.466542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.466575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.466891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.466924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.467240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.467275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.467576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.467609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.467918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.352 [2024-07-15 22:05:07.467951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.352 qpair failed and we were unable to recover it.
00:27:13.352 [2024-07-15 22:05:07.468211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.468252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.468599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.468631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.468967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.468999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.469241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.469276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.469537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.469553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.469851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.469883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.470238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.470272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.470537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.470570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.470809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.470842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.471093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.471110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.471250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.471267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.471562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.471595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.471771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.471803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.472130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.472162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.472518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.472551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.472880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.472917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.473168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.473201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.473556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.473589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.473820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.473852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.474177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.474193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.474409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.474425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.474743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.474775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.474946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.474979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.475297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.475331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.475564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.475596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.475861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.475894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.476068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.476100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.476422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.476456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.476782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.476814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.476999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.477016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.477223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.477264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.477566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.353 [2024-07-15 22:05:07.477598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.353 qpair failed and we were unable to recover it.
00:27:13.353 [2024-07-15 22:05:07.477828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.477861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.478105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.478122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.478360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.478378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.478677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.478710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.478961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.478993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.479263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.479297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.479529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.479561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.479895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.479927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.480249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.480283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.480579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.480611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.480934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.480986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.481201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.481219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.481468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.481485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.481752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.481768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.481985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.482001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.482293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.482326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.482582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.482615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.482962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.482994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.483323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.483356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.483682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.483716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.484042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.354 [2024-07-15 22:05:07.484076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.354 qpair failed and we were unable to recover it.
00:27:13.354 [2024-07-15 22:05:07.484355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.484388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.484639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.484672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.484994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.485037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.485208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.485231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.485438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.485455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.485742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.485760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.486049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.486066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.486283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.486328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.486649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.486680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.487005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.487037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.487340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.487373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.487617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.487649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.487888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.487922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.488153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.488185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.488447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.488481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.488651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.488683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.489010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.489047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.489304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.489336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.489686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.489719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.490066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.490098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.490423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.490441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.490738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.490770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.491117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.491149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.491471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.491505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.491807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.491840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.492167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.492200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.492512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.492544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.492776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.492810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.493077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.493109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.355 [2024-07-15 22:05:07.493428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.355 [2024-07-15 22:05:07.493446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.355 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.493749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.493782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.494118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.494150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.494403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.494421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.494697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.494714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.494910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.494927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.495235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.495269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.495590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.495622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.495950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.495983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.496306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.496340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.496663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.496696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.497022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.497055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.497372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.497405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.497637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.497671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.497933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.497965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.498270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.498304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.498548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.498564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.498796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.498812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.499048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.499066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.499365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.499382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.499582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.499599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.499883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.499899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.500098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.500116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.500409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.500426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.500636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.500653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.500867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.500900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.501159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.501192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.501437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.501455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.501678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.501711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.356 [2024-07-15 22:05:07.502013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.356 [2024-07-15 22:05:07.502045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.356 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.502307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.502324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.502545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.502563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.502775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.502792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.503062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.503079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.503240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.503258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.503487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.503520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.503862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.503894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.504143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.504175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.504376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.504416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.504618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.504636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.504832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.504849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.505051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.505068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.505284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.505318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.505648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.505680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.505913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.505946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.506273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.506308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.506576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.506608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.506860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.506893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.507188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.507205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.507518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.507552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.507799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.507832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.508079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.508112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.508395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.508430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.508662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.508694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.509025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.509058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.509332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.509371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.509712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.509744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.510069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.510102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.510389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.510422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.510754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.510786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.511030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.511062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.511321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.511353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.511655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.357 [2024-07-15 22:05:07.511687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.357 qpair failed and we were unable to recover it.
00:27:13.357 [2024-07-15 22:05:07.511868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.358 [2024-07-15 22:05:07.511900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.358 qpair failed and we were unable to recover it.
00:27:13.358 [2024-07-15 22:05:07.512223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.358 [2024-07-15 22:05:07.512264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.358 qpair failed and we were unable to recover it.
00:27:13.358 [2024-07-15 22:05:07.512539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.358 [2024-07-15 22:05:07.512572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.358 qpair failed and we were unable to recover it.
00:27:13.358 [2024-07-15 22:05:07.512874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.358 [2024-07-15 22:05:07.512907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.358 qpair failed and we were unable to recover it.
00:27:13.358 [2024-07-15 22:05:07.513235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.358 [2024-07-15 22:05:07.513268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.358 qpair failed and we were unable to recover it.
00:27:13.358 [2024-07-15 22:05:07.513518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.513534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.513834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.513867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.514203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.514259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.514494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.514526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.514778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.514809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.515105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.515137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.515459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.515477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.515773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.515806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.515994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.516027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.516189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.516206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 
00:27:13.358 [2024-07-15 22:05:07.516424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.516457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.516702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.516734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.516977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.517009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.517276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.517309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.517490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.517527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.517792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.517824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.518151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.518184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.518491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.518508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.518822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.518854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.519181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.519213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 
00:27:13.358 [2024-07-15 22:05:07.519545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.519578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.519902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.519934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.520248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.520281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.520605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.520637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.520961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.520993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.521295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.521329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.521649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.521682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-15 22:05:07.521959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-15 22:05:07.521992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.522241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.522275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.522550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.522567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 
00:27:13.359 [2024-07-15 22:05:07.522802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.522818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.523084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.523101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.523321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.523339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.523538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.523557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.523712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.523728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.523957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.523989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.524257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.524291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.524613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.524630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.524872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.524890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.525154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.525171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 
00:27:13.359 [2024-07-15 22:05:07.525410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.525427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.525774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.525807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.526138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.526171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.526481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.526514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.526746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.526779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.527014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.527047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.527294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.527327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.527582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.527615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.527939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.527971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.528236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.528271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 
00:27:13.359 [2024-07-15 22:05:07.528597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.528629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.528861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.528894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.529239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.529272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.529503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.529535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.529863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.529897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.530239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.530274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.530609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.530642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.530805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.530838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.531069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.531102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.531379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.531413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 
00:27:13.359 [2024-07-15 22:05:07.531647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.531664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.359 [2024-07-15 22:05:07.532005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.359 [2024-07-15 22:05:07.532037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.359 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.532379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.532413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.532729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.532761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.533019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.533051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.533401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.533434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.533701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.533718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.534015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.534047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.534293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.534327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.534564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.534599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 
00:27:13.360 [2024-07-15 22:05:07.534944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.534977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.535293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.535327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.535571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.535603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.535871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.535904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.536243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.536277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.536572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.536589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.536811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.536829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.537138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.537155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.537360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.537401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.537704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.537736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 
00:27:13.360 [2024-07-15 22:05:07.538045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.538077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.538390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.538424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.538726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.538746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.539049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.539082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.539332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.539365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.539621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.539653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.539998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.540030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.540337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.540369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.540628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.360 [2024-07-15 22:05:07.540660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.360 qpair failed and we were unable to recover it. 00:27:13.360 [2024-07-15 22:05:07.540930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.540961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 
00:27:13.361 [2024-07-15 22:05:07.541144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.541184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.541490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.541523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.541864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.541895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.542200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.542241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.542567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.542600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.542838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.542871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.543267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.543300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.543608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.543640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.543969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.544001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.544247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.544280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 
00:27:13.361 [2024-07-15 22:05:07.544583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.544616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.544959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.544992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.545315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.545349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.545508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.545525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.545754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.545771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.546001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.546034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.546333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.546366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.546678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.546711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.547022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.547055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.547296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.547316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 
00:27:13.361 [2024-07-15 22:05:07.547539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.547556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.547773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.547805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.548039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.548071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.548336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.548354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.548554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.548571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.548853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.548870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.549168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.549201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.549406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.549439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.549690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.549724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.361 [2024-07-15 22:05:07.550070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.550103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 
00:27:13.361 [2024-07-15 22:05:07.550370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.361 [2024-07-15 22:05:07.550387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.361 qpair failed and we were unable to recover it. 00:27:13.362 [2024-07-15 22:05:07.550678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.362 [2024-07-15 22:05:07.550711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.362 qpair failed and we were unable to recover it. 00:27:13.362 [2024-07-15 22:05:07.550961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.362 [2024-07-15 22:05:07.550994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.362 qpair failed and we were unable to recover it. 00:27:13.362 [2024-07-15 22:05:07.551325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.362 [2024-07-15 22:05:07.551359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.362 qpair failed and we were unable to recover it. 00:27:13.362 [2024-07-15 22:05:07.551684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.362 [2024-07-15 22:05:07.551717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.362 qpair failed and we were unable to recover it. 00:27:13.362 [2024-07-15 22:05:07.552049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.362 [2024-07-15 22:05:07.552082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.362 qpair failed and we were unable to recover it. 00:27:13.362 [2024-07-15 22:05:07.552376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.362 [2024-07-15 22:05:07.552394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.362 qpair failed and we were unable to recover it. 00:27:13.362 [2024-07-15 22:05:07.552661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.362 [2024-07-15 22:05:07.552704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.362 qpair failed and we were unable to recover it. 00:27:13.362 [2024-07-15 22:05:07.553012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.362 [2024-07-15 22:05:07.553044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.362 qpair failed and we were unable to recover it. 00:27:13.362 [2024-07-15 22:05:07.553279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.362 [2024-07-15 22:05:07.553312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.362 qpair failed and we were unable to recover it. 
00:27:13.362 [2024-07-15 22:05:07.553564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.362 [2024-07-15 22:05:07.553597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.362 qpair failed and we were unable to recover it.
00:27:13.362 [... the three-line error sequence above repeats roughly 200 more times with identical content (host timestamps 22:05:07.553 through 22:05:07.618): every reconnect attempt to tqpair=0x1e7ffc0 at addr=10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED) and the qpair is not recovered ...]
00:27:13.640 [2024-07-15 22:05:07.618834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.640 [2024-07-15 22:05:07.618851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.640 qpair failed and we were unable to recover it.
00:27:13.640 [2024-07-15 22:05:07.619142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.619174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.619498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.619543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.619842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.619875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.620185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.620217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.620522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.620539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.620808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.620839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.621074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.621106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.621439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.621473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.621701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.621733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.622073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.622106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-15 22:05:07.622450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.622483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.622758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.622774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.623087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.623121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.623446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.623485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.623788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.623822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.624116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.624150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.624467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.624485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.624714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.624731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.624946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.624963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.625260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.625294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-15 22:05:07.625571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.625603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.625931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.625964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.626269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.626302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.626627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.626660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.626904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.626937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.627192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.627233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.627496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.627529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.627866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.627899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.628154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.628187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.628462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.628496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-15 22:05:07.628823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.628854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.629145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.629177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.629527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.629560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.629865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.629898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.630148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.630181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.630449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.630482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-15 22:05:07.630724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-15 22:05:07.630757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.631030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.631062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.631391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.631425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.631751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.631783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 
00:27:13.641 [2024-07-15 22:05:07.632052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.632085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.632436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.632469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.632642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.632686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.632960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.632978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.633180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.633198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.633427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.633445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.633716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.633749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.634072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.634105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.634351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.634385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.634688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.634720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 
00:27:13.641 [2024-07-15 22:05:07.634967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.634999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.635369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.635402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.635749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.635791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.636117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.636150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.636409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.636444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.636799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.636831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.637174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.637207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.637408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.637442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.637622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.637655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.637848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.637865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 
00:27:13.641 [2024-07-15 22:05:07.638161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.638194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.638540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.638573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.638869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.638886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.639086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.639102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.639372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.639406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.639745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.639779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.640082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.640113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.640432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.640466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.640798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.640830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.641075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.641107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 
00:27:13.641 [2024-07-15 22:05:07.641466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.641505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.641876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.641910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.642149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.642182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.642375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.642410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.642659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.642677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.642959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.642991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-15 22:05:07.643324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-15 22:05:07.643358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.643520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.643538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.643834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.643867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.644198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.644239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-07-15 22:05:07.644562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.644595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.644916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.644954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.645135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.645168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.645439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.645472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.645772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.645789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.646112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.646145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.646378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.646412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.646716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.646748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.647035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.647052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.647255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.647273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-07-15 22:05:07.647470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.647487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.647784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.647815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.648149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.648182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.648454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.648487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.648833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.648865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.649107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.649140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.649399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.649433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.649737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.649770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.650083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.650115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.650410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.650445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-07-15 22:05:07.650769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.650802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.651123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.651156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.651486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.651521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.651867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.651899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.652235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.652269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.652586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.652604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.652829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.652847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.653161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.653178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.653491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.653530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.653791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.653824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-07-15 22:05:07.654175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.654207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.654544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.654562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.654788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.654821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.655078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.655110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.655461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.655495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.655761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.655794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.656135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.656168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-15 22:05:07.656499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-15 22:05:07.656533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.656840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.656872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.657182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.657214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-07-15 22:05:07.657518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.657535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.657824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.657841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.658113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.658130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.658409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.658443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.658799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.658831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.659086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.659119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.659375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.659408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.659584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.659601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.659812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.659845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-15 22:05:07.660167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-15 22:05:07.660200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-07-15 22:05:07.660548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.643 [2024-07-15 22:05:07.660581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.643 qpair failed and we were unable to recover it.
00:27:13.643 [2024-07-15 22:05:07.660908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.643 [2024-07-15 22:05:07.660940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.643 qpair failed and we were unable to recover it.
00:27:13.648 [2024-07-15 22:05:07.721151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.648 [2024-07-15 22:05:07.721183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.648 qpair failed and we were unable to recover it.
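On Linux, errno = 111 is ECONNREFUSED: every connect() to 10.0.0.2 port 4420 above is being answered with a TCP RST because nothing is listening on that port at the moment the host retries. A minimal, self-contained C probe (hypothetical; not part of the SPDK test suite) that triggers and decodes the same errno against a reachable host with no listener:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Same endpoint the log shows; any reachable host:port with no
         * listener produces the same ECONNREFUSED. */
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }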
00:27:13.648 [2024-07-15 22:05:07.721432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.648 [2024-07-15 22:05:07.721466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.648 qpair failed and we were unable to recover it.
[... the same failure repeats at 22:05:07.721664 ...]
00:27:13.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3852271 Killed "${NVMF_APP[@]}" "$@"
[... the same failure repeats at 22:05:07.721999 and 22:05:07.722253 ...]
00:27:13.648 [2024-07-15 22:05:07.722468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.648 [2024-07-15 22:05:07.722486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.648 qpair failed and we were unable to recover it.
00:27:13.648 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[... the same failure repeats at 22:05:07.722817 ...]
00:27:13.648 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[... the same failure repeats at 22:05:07.723124 ...]
00:27:13.648 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
[... the same failure repeats at 22:05:07.723405 ...]
00:27:13.648 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:13.648 [2024-07-15 22:05:07.723635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.648 [2024-07-15 22:05:07.723653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.648 qpair failed and we were unable to recover it.
00:27:13.648 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same three-line connect()/qpair failure repeats 19 more times for tqpair=0x1e7ffc0, timestamps 22:05:07.723941 through 22:05:07.728909 ...]
00:27:13.648 [2024-07-15 22:05:07.729215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.648 [2024-07-15 22:05:07.729243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.648 qpair failed and we were unable to recover it.
[... the same failure repeats 5 more times, timestamps 22:05:07.729453 through 22:05:07.730421 ...]
00:27:13.649 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3852985
[... the same failure repeats at 22:05:07.730712 ...]
00:27:13.649 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3852985
00:27:13.649 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
[... the same failure repeats at 22:05:07.731035 and 22:05:07.731300 ...]
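For reference on the nvmf_tgt flags traced above: in SPDK's app framework -m takes a hexadecimal core mask, so -m 0xF0 runs the target's reactors on cores 4-7, one per set bit (-i and -e would select the shared-memory instance id and the tracepoint group mask, assuming this build follows upstream SPDK's usual option set). A tiny sketch of the mask arithmetic (illustrative only, not SPDK code):

/* coremask.c - decode the -m 0xF0 core mask from the nvmf_tgt command line. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xF0ULL;        /* from "nvmf_tgt -i 0 -e 0xFFFF -m 0xF0" */
    printf("core mask 0x%llX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core))
            printf(" %d", core);              /* prints: 4 5 6 7 */
    }
    printf("\n");
    return 0;
}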
00:27:13.649 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3852985 ']'
00:27:13.649 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:13.649 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:13.649 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:13.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:13.649 22:05:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
[... connect()/qpair failures for tqpair=0x1e7ffc0 continue to interleave with the trace above, timestamps 22:05:07.731550 through 22:05:07.733143 ...]
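The waitforlisten call traced above (defined in autotest_common.sh, per the @829-@838 frames) polls, up to max_retries=100 times, until the freshly started nvmf_tgt is up and accepting connections on its RPC socket /var/tmp/spdk.sock; only then do the retrying qpairs stand a chance of reconnecting. A self-contained sketch of that wait-for-listener pattern (an illustration of the idea under those assumptions, not the actual shell helper):

/* waitlisten.c - poll a Unix-domain socket until a daemon is accepting on it. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;                         /* daemon is up and listening */
        }
        close(fd);
        usleep(100 * 1000);                   /* back off 100 ms, then retry */
    }
    return -1;                                /* gave up: nothing ever listened */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("listener is up\n");
    else
        printf("timed out waiting for listener\n");
    return 0;
}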
00:27:13.649 [2024-07-15 22:05:07.733295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.649 [2024-07-15 22:05:07.733313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.649 qpair failed and we were unable to recover it.
[... the same three-line failure repeats 21 more times for tqpair=0x1e7ffc0, timestamps 22:05:07.733621 through 22:05:07.738583 ...]
00:27:13.649 [2024-07-15 22:05:07.738934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.649 [2024-07-15 22:05:07.738980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.649 qpair failed and we were unable to recover it.
[... from here the failing tqpair is 0x7f72d0000b90; the failure repeats 7 more times, timestamps 22:05:07.739244 through 22:05:07.740833 ...]
00:27:13.650 [2024-07-15 22:05:07.741094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.650 [2024-07-15 22:05:07.741112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.650 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats for every subsequent attempt against tqpair=0x7f72d0000b90, timestamps 22:05:07.741377 through 22:05:07.768680 ...]
00:27:13.652 [2024-07-15 22:05:07.768796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.768812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.769035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.769051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.769357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.769376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.769588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.769603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.769816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.769833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.770046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.770062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.770210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.770231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.770493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.770510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.770767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.770783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.771094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.771109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 
00:27:13.653 [2024-07-15 22:05:07.771218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.771243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.771434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.771450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.771731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.771748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.771891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.771907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.772167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.772183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.772461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.772478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.772622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.772638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.772920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.772936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.773156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.773171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.773397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.773413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 
00:27:13.653 [2024-07-15 22:05:07.773693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.773709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.773857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.773873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.774134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.774149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.774297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.774313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.774507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.774522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.774725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.774741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.775004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.775020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.775152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.775168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.775434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.775452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.775581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.775597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 
00:27:13.653 [2024-07-15 22:05:07.775821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.775837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.775991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.776007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.776155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.776171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.776448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.776464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.776675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.776691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.776966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.776982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.777125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.777141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.777399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.777415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.777621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.777637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-15 22:05:07.777840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.777856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 
00:27:13.653 [2024-07-15 22:05:07.778152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-15 22:05:07.778167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.778303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.778319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.778509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.778529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.778647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.778663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.778871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.778887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.779090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.779106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.779238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.779254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.779407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.779422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.779680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.779696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.779921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.779937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 
00:27:13.654 [2024-07-15 22:05:07.780051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.780067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.780278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.780294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.780436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.780452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.780647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.780663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.780882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.780897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.781088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.781103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.781238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.781254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.781514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.781530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.781735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.781751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-15 22:05:07.781954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-15 22:05:07.781970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 
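On Linux, errno 111 is ECONNREFUSED: the peer at 10.0.0.2 actively rejected the TCP connection on port 4420 (the standard NVMe/TCP port), which usually means no nvmf target was listening there yet when the initiator retried. The messages come from SPDK's POSIX sock module (posix.c) via the NVMe/TCP qpair connect path (nvme_tcp.c). As a reference point, here is a minimal sketch of the failing call reduced to plain POSIX sockets; it is not SPDK's implementation, only the syscall and errno behavior the log reflects:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	/* Same endpoint the log shows the initiator dialing. */
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),
	};
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* With no listener on 10.0.0.2:4420, connect() fails and errno
	 * is set to 111 (ECONNREFUSED), matching the log above. */
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}

	close(fd);
	return 0;
}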
00:27:13.654 [2024-07-15 22:05:07.783624] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization...
00:27:13.654 [2024-07-15 22:05:07.783681] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
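The EAL parameter line above is how an SPDK application reports handing off to DPDK: core mask 0xF0 (cores 4-7), a fixed base virtual address, and shared-memory prefix spdk0. For reference, an SPDK app arrives at such a command line through struct spdk_env_opts; the sketch below uses the public env API, but exactly which options the nvmf app sets (and that --file-prefix=spdk0 derives from shm_id = 0) is an assumption:

```c
#include "spdk/env.h"

/* Minimal env setup that would yield EAL parameters similar to the
 * ones logged above. */
int init_env(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "nvmf";                  /* becomes argv[0] of the EAL args */
	opts.core_mask = "0xF0";             /* -c 0xF0: run on cores 4-7       */
	opts.shm_id = 0;                     /* assumed source of --file-prefix=spdk0 */
	opts.base_virtaddr = 0x200000000000; /* --base-virtaddr                 */

	/* spdk_env_init() builds the DPDK EAL argument list and invokes
	 * EAL initialization under the hood. */
	return spdk_env_init(&opts);
}
```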
00:27:13.657 [2024-07-15 22:05:07.805551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.657 [2024-07-15 22:05:07.805567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.657 qpair failed and we were unable to recover it.
00:27:13.657 [2024-07-15 22:05:07.805767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.805784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.805912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.805929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.806162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.806178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.806305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.806321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.806514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.806531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.806717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.806735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.806996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.807011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.807138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.807154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.807277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.807292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.807440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.807456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 
00:27:13.657 [2024-07-15 22:05:07.807655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.807670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.807857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.807872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.808063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.808079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.808295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.808313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.808531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.808548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.808748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.808764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.808982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.808998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.809139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.809155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.809354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.809370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.809520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.809537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 
00:27:13.657 [2024-07-15 22:05:07.809775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.809790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.809932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.809948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.810136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.810152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.810408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.810425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.810691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.810707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.810857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.810873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.811016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.811033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.811243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-15 22:05:07.811259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-15 22:05:07.811448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-15 22:05:07.811464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-15 22:05:07.811720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-15 22:05:07.811736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 
[... 6 further identical records for tqpair=0x7f72d0000b90, timestamps 2024-07-15 22:05:07.811932 through 22:05:07.812889, omitted ...]
00:27:13.658 [2024-07-15 22:05:07.813027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.658 [2024-07-15 22:05:07.813042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.658 EAL: No free 2048 kB hugepages reported on node 1
00:27:13.658 qpair failed and we were unable to recover it.
00:27:13.658 [2024-07-15 22:05:07.813281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.658 [2024-07-15 22:05:07.813311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.658 qpair failed and we were unable to recover it.
00:27:13.658 [2024-07-15 22:05:07.813533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.658 [2024-07-15 22:05:07.813563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.658 qpair failed and we were unable to recover it.
[... 130 further identical records for tqpair=0x7f72c8000b90, timestamps 2024-07-15 22:05:07.813772 through 22:05:07.840819, omitted ...]
00:27:13.661 [2024-07-15 22:05:07.841111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.661 [2024-07-15 22:05:07.841123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.661 qpair failed and we were unable to recover it.
00:27:13.661 [2024-07-15 22:05:07.841389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.841402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 00:27:13.661 [2024-07-15 22:05:07.841663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.841675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 00:27:13.661 [2024-07-15 22:05:07.841916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.841927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 00:27:13.661 [2024-07-15 22:05:07.842185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.842197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 00:27:13.661 [2024-07-15 22:05:07.842455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.842467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 00:27:13.661 [2024-07-15 22:05:07.842649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.842661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 00:27:13.661 [2024-07-15 22:05:07.842879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.842890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 00:27:13.661 [2024-07-15 22:05:07.843131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.843144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 00:27:13.661 [2024-07-15 22:05:07.843338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.843351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 00:27:13.661 [2024-07-15 22:05:07.843530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.661 [2024-07-15 22:05:07.843542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.661 qpair failed and we were unable to recover it. 
00:27:13.662 [2024-07-15 22:05:07.843659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.843670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.843925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.843937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.844126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.844137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.844358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.844370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.844505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.844517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.844780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.844792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.844917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.844929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.845191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.845203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.845454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.845466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.845644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.845656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 
00:27:13.662 [2024-07-15 22:05:07.845902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.845914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.846158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.846170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.846440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.846453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.846719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.846731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.846905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.846917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.847143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.847155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.847350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.847363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.847607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.847619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.847891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.847903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.848145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.848157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 
00:27:13.662 [2024-07-15 22:05:07.848407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.848419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.848704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.848716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.848945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.848957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.849078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.849091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.849306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.849318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.849566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.849580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.849824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.849836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.850014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.850027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.850291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.850304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.850426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.850438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 
00:27:13.662 [2024-07-15 22:05:07.850703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.850716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.850966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.850978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.851227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.851240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.851365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.851376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.851570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.851582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.851890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.851902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.852180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.852192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.852440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.852453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.852646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.852658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 00:27:13.662 [2024-07-15 22:05:07.852927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.852939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.662 qpair failed and we were unable to recover it. 
00:27:13.662 [2024-07-15 22:05:07.853131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.662 [2024-07-15 22:05:07.853143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.853255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.853267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.853377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.853388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.853576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.853587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.853850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.853861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.854046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.854057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.854361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.854373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.854553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.854565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.854805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.854817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.855035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.855047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 
00:27:13.663 [2024-07-15 22:05:07.855309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.855322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.855562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.855574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.855827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.855839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.855961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.855973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.856174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.856186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.856362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.856375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.856638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.856649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.856837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.856849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.857121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.857133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.857332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.857344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 
00:27:13.663 [2024-07-15 22:05:07.857532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.857543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.857795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.857807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.857986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.857998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.858263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.858275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.858491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.858503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.858742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.858755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.859053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.859064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.859260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.859272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.859465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.859477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.859657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.859669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 
00:27:13.663 [2024-07-15 22:05:07.859938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.859950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.860136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.860147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.860415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.860428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.860692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.860704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.860841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.860853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.861002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.861014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.663 [2024-07-15 22:05:07.861244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.663 [2024-07-15 22:05:07.861257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.663 qpair failed and we were unable to recover it. 00:27:13.664 [2024-07-15 22:05:07.861497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.664 [2024-07-15 22:05:07.861508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.664 qpair failed and we were unable to recover it. 00:27:13.664 [2024-07-15 22:05:07.861707] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.664 [2024-07-15 22:05:07.861803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.664 [2024-07-15 22:05:07.861814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.664 qpair failed and we were unable to recover it. 
[... the pattern continues for further reconnect attempts from 22:05:07.862 through 22:05:07.880; the elapsed-time prefix advances from 00:27:13.664 to 00:27:13.945 ...]
00:27:13.945 [2024-07-15 22:05:07.880805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.945 [2024-07-15 22:05:07.880817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.945 qpair failed and we were unable to recover it.
00:27:13.945 [2024-07-15 22:05:07.881023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-15 22:05:07.881035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-15 22:05:07.881253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-15 22:05:07.881265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-15 22:05:07.881481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-15 22:05:07.881493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-15 22:05:07.881760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-15 22:05:07.881772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-15 22:05:07.882061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.882072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.882248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.882260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.882457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.882469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.882709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.882721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.882987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.882999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.883189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.883200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 
00:27:13.946 [2024-07-15 22:05:07.883376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.883388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.883565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.883577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.883781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.883793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.883974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.883986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.884171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.884183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.884410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.884422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.884610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.884622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.884889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.884901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.885162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.885175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.885428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.885441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 
00:27:13.946 [2024-07-15 22:05:07.885622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.885634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.885876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.885887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.886154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.886166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.886414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.886426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.886721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.886733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.886844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.886855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.887114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.887128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.887339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.887352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.887593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.887605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.887870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.887882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 
00:27:13.946 [2024-07-15 22:05:07.888137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.888150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.888435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.888446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.888721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.888733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.888919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.888931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.889217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.889233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.889371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.889384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.889586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.889598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.889868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.889880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.890133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.890144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-15 22:05:07.890321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.890333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 
00:27:13.946 [2024-07-15 22:05:07.890515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-15 22:05:07.890527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.890645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.890656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.890898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.890910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.891022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.891034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.891206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.891217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.891397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.891409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.891678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.891690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.891989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.892001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.892115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.892127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.892321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.892333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-07-15 22:05:07.892593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.892605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.892882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.892894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.893116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.893128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.893308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.893320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.893495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.893507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.893755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.893768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.893978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.893989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.894176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.894188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.894393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.894405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.894646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.894658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-07-15 22:05:07.894844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.894856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.895094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.895106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.895345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.895357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.895478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.895490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.895759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.895771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.895964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.895976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.896169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.896183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.896385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.896399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.896645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.896657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.896874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.896887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-07-15 22:05:07.897098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.897111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.897327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.897340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.897607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.897621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.897819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.897831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.898012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.898024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.898284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.898297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.898552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.898566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.898810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.898822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.899037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.899049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-15 22:05:07.899177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-15 22:05:07.899189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-07-15 22:05:07.899386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.899398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.899653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.899665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.899842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.899855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.900033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.900047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.900262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.900277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.900509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.900525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.900803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.900817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.901079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.901093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.901229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.901242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.901511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.901525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-07-15 22:05:07.901770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.901783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.902069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.902082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.902333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.902347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.902617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.902630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.902817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.902830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.903093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.903106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.903304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.903317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.903522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.903535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.903646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.903659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.903926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.903938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-07-15 22:05:07.904198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.904210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.904415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.904427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.904615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.904628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.904824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.904835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.904970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.904982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.905233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.905245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.905432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.905448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.905711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.905724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.905915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.905928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.906186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.906200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-07-15 22:05:07.906418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.906430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.906664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-15 22:05:07.906678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-15 22:05:07.906905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.906918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.907111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.907123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.907354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.907367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.907563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.907575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.907772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.907784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.908046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.908058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.908245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.908258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.908451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.908463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-07-15 22:05:07.908672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.908684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.908880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.908892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.909170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.909182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.909383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.909396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.909525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.909538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.909740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.909752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.909951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.909963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.910191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.910203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.910533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.910545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-15 22:05:07.910790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-15 22:05:07.910801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-07-15 22:05:07.910988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.949 [2024-07-15 22:05:07.910999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.949 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously, with only the timestamps advancing, from 22:05:07.911259 through 22:05:07.939821 ...]
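errno 111 on Linux is ECONNREFUSED: nothing was accepting TCP connections at 10.0.0.2:4420 when posix_sock_create() called connect(), so each qpair connect attempt fails immediately and the host keeps retrying. A minimal sketch of checking the same condition from a shell on the test host (hypothetical probe; assumes nc is installed, with the address and port taken from the log lines above):

  # Probe the NVMe-oF TCP listen address; while nothing listens on
  # 10.0.0.2:4420 this exits non-zero with "Connection refused".
  nc -zv 10.0.0.2 4420 || echo 'connect() refused (errno 111, ECONNREFUSED)'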
[... the same connect() failed (errno = 111) triplet continues from 22:05:07.940029 through 22:05:07.940801 ...]
00:27:13.952 [2024-07-15 22:05:07.940880] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:13.952 [2024-07-15 22:05:07.940912] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:13.952 [2024-07-15 22:05:07.940920] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:13.952 [2024-07-15 22:05:07.940928] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:13.952 [2024-07-15 22:05:07.940933] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:13.953 [2024-07-15 22:05:07.941041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:13.953 [2024-07-15 22:05:07.941149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:13.953 [2024-07-15 22:05:07.941285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:13.953 [2024-07-15 22:05:07.941285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
[... connect() failed (errno = 111) triplets, interleaved with the reactor notices above in the raw output, continue from 22:05:07.940979 through 22:05:07.943462 ...]
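The app_setup_trace notices record how this run can be inspected: the nvmf target was started with tracepoint group mask 0xFFFF, so its events can be snapshotted live, or the shared-memory trace file can be saved for offline analysis. Both options, using exactly the command and path given in the notices (assumes the SPDK tools are on PATH and this is the only SPDK app running, so instance id 0 applies):

  # Capture a live snapshot of the nvmf tracepoints from shm instance 0
  spdk_trace -s nvmf -i 0
  # Or keep the raw trace file for offline analysis/debug
  # (the destination path here is illustrative)
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0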
[... the connect() failed (errno = 111) / qpair-failure triplet keeps repeating, with only the timestamps advancing, from 22:05:07.943604 through 22:05:07.956809 ...]
00:27:13.954 [2024-07-15 22:05:07.957037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.954 [2024-07-15 22:05:07.957051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.954 qpair failed and we were unable to recover it.
00:27:13.954 [2024-07-15 22:05:07.957174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-07-15 22:05:07.957187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-07-15 22:05:07.957452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-07-15 22:05:07.957466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-07-15 22:05:07.957713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-07-15 22:05:07.957726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-07-15 22:05:07.958015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-07-15 22:05:07.958029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-07-15 22:05:07.958156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-07-15 22:05:07.958169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-07-15 22:05:07.958394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-07-15 22:05:07.958407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-07-15 22:05:07.958656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-07-15 22:05:07.958669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-07-15 22:05:07.958808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-07-15 22:05:07.958824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.954 qpair failed and we were unable to recover it. 00:27:13.954 [2024-07-15 22:05:07.959067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.954 [2024-07-15 22:05:07.959081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.959275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.959288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 
00:27:13.955 [2024-07-15 22:05:07.959472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.959486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.959682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.959695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.959963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.959977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.960165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.960178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.960375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.960389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.960565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.960578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.960835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.960847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.961028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.961040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.961306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.961319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.961569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.961581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 
00:27:13.955 [2024-07-15 22:05:07.961773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.961788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.962010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.962023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.962201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.962213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.962465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.962478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.962672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.962684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.962861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.962872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.963116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.963129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.963328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.963341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.963474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.963485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.963765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.963777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 
00:27:13.955 [2024-07-15 22:05:07.963918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.963931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.964111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.964125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.964366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.964378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.964670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.964682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.964815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.964827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.965092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.965105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.965370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.965383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.965563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.965575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.965787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.965800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.966067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.966080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 
00:27:13.955 [2024-07-15 22:05:07.966362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.966375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.966500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.966512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.966724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.966737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.967027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.967039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.967234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.967247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.967451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.967465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.967678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.967690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.968014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.968028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-15 22:05:07.968240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-15 22:05:07.968253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.968495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.968507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 
00:27:13.956 [2024-07-15 22:05:07.968750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.968763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.968876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.968888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.969154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.969166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.969430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.969442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.969685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.969697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.969892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.969905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.970115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.970127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.970386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.970399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.970582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.970594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.970834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.970847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 
00:27:13.956 [2024-07-15 22:05:07.971113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.971125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.971309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.971323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.971640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.971652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.971907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.971918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.972101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.972113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.972314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.972327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.972612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.972624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.972815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.972828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.972973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.972985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.973240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.973252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 
00:27:13.956 [2024-07-15 22:05:07.973474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.973486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.973670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.973682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.973900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.973913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.974156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.974168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.974439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.974452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.974702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.974715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.974887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.974900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.975032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.975044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.975297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.975310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.975552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.975564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 
00:27:13.956 [2024-07-15 22:05:07.975760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.975772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.975859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.975871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.976075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.976087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.976294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.976307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.976444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.976457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.976580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.976592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.976782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.976793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.976976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.976990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.977122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-15 22:05:07.977134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-15 22:05:07.977264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.977277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-07-15 22:05:07.977460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.977473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.977712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.977725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.977938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.977950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.978069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.978081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.978229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.978241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.978497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.978509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.978755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.978767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.978960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.978972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.979106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.979118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.979241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.979253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-07-15 22:05:07.979363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.979375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.979484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.979495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.979622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.979634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.979737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.979749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.979926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.979938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.980129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.980141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.980333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.980345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.980536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.980548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.980676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.980688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.980798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.980810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-07-15 22:05:07.980982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.980994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.981199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.981211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.981400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.981412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.981542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.981554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.981645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.981657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.981785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.981797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.981926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.981938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.982068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.982080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.982270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.982282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.982412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.982425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-07-15 22:05:07.982595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.982607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.982802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.982814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.982956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.982968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.983081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-15 22:05:07.983093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-15 22:05:07.983349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-15 22:05:07.983363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-15 22:05:07.983538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-15 22:05:07.983550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-15 22:05:07.983743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-15 22:05:07.983755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-15 22:05:07.983949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-15 22:05:07.983963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-15 22:05:07.984142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-15 22:05:07.984155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-15 22:05:07.984281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-15 22:05:07.984294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-07-15 22:05:07.984468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.958 [2024-07-15 22:05:07.984480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.958 qpair failed and we were unable to recover it.
00:27:13.958 [2024-07-15 22:05:07.989183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.958 [2024-07-15 22:05:07.989195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.958 qpair failed and we were unable to recover it.
00:27:13.958 [2024-07-15 22:05:07.989409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.958 [2024-07-15 22:05:07.989444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.958 qpair failed and we were unable to recover it.
00:27:13.959 [2024-07-15 22:05:07.997822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.959 [2024-07-15 22:05:07.997837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.959 qpair failed and we were unable to recover it.
00:27:13.959 [2024-07-15 22:05:07.998078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.959 [2024-07-15 22:05:07.998099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.959 qpair failed and we were unable to recover it.
00:27:13.960 [2024-07-15 22:05:08.005896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.960 [2024-07-15 22:05:08.005911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.960 qpair failed and we were unable to recover it.
00:27:13.960 [2024-07-15 22:05:08.006121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.960 [2024-07-15 22:05:08.006141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.960 qpair failed and we were unable to recover it.
00:27:13.960 [2024-07-15 22:05:08.006434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.960 [2024-07-15 22:05:08.006467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.960 qpair failed and we were unable to recover it.
00:27:13.963 [2024-07-15 22:05:08.027966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.963 [2024-07-15 22:05:08.027981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.963 qpair failed and we were unable to recover it.
00:27:13.963 [2024-07-15 22:05:08.028200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.028216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.028467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.028483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.028669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.028684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.028903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.028917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.029041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.029057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.029192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.029207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.029490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.029506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.029693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.029709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.029844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.029859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.030065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.030080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-07-15 22:05:08.030228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.030244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.030387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.030402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.030682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.030697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.030884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.030900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.031041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.031056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.031331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.031348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.031562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.031601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.031702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.031719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.031938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.031953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.032137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.032152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 
00:27:13.963 [2024-07-15 22:05:08.032374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.032390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.032598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.032613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.963 qpair failed and we were unable to recover it. 00:27:13.963 [2024-07-15 22:05:08.032801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-07-15 22:05:08.032816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.032947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.032963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.033155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.033171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.033304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.033320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.033594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.033610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.033738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.033754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.033867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.033883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.034014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.034033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-07-15 22:05:08.034174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.034189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.034334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.034350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.034563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.034578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.034773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.034788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.034981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.034996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.035199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.035214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.035416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.035431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.035616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.035631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.035817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.035832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.036016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.036032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-07-15 22:05:08.036174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.036189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.036487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.036503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.036780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.036795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.037097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.037113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.037327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.037342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.037543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.037558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.037689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.037705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.037914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.037930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.038062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.038078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.038273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.038288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-07-15 22:05:08.038561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.038576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.038759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.038774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.038995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.039011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.039130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.039145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.039331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.039346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.039516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.039531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.039784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.039801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.039984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.039999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.040125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.040140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.040413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.040429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 
00:27:13.964 [2024-07-15 22:05:08.040609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.040625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.040770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.040785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.040923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-07-15 22:05:08.040939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.964 qpair failed and we were unable to recover it. 00:27:13.964 [2024-07-15 22:05:08.041066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.041082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.041336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.041351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.041475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.041491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.041693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.041710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.042029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.042044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.042322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.042338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.042472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.042487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 
00:27:13.965 [2024-07-15 22:05:08.042738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.042754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.042953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.042969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.043163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.043178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.043370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.043386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.043609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.043624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.043808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.043824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.043974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.043989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.044243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.044259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.044535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.044549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.044754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.044769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 
00:27:13.965 [2024-07-15 22:05:08.044961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.044976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.045259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.045275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.045534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.045549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.045769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.045787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.046000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.046015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.046146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.046162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.046283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.046298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.046425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.046440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.046636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.046651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.046846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.046861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 
00:27:13.965 [2024-07-15 22:05:08.047059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.047074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.047267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.047282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.047428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.047444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.047639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.047655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.047852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.047868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.047989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.048005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.048142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.048158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.048355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.048372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.048491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-07-15 22:05:08.048507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.965 qpair failed and we were unable to recover it. 00:27:13.965 [2024-07-15 22:05:08.048722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.048738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 
00:27:13.966 [2024-07-15 22:05:08.048872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.048887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.049119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.049134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.049270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.049286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.049468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.049483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.049752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.049768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.050019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.050035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.050219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.050239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.050381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.050396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.050612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.050627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.050829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.050845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 
00:27:13.966 [2024-07-15 22:05:08.051028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.051047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.051176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.051191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.051383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.051399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.051617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.051632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.051780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.051796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.051946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.051961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.052141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.052157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.052408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.052423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.052605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.052620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.052843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.052859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 
00:27:13.966 [2024-07-15 22:05:08.053051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.053067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.053213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.053232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.053509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.053525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.053730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.053746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.053896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.053912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.054113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.054129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.054411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.054426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.054660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.054675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.054816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.054831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 00:27:13.966 [2024-07-15 22:05:08.055021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.966 [2024-07-15 22:05:08.055037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.966 qpair failed and we were unable to recover it. 
00:27:13.966 [2024-07-15 22:05:08.055262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.966 [2024-07-15 22:05:08.055278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.966 qpair failed and we were unable to recover it.
[... same connect() failed / sock connection error / qpair failed sequence repeated for tqpair=0x1e7ffc0 through 2024-07-15 22:05:08.059 ...]
00:27:13.967 [2024-07-15 22:05:08.059157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.967 [2024-07-15 22:05:08.059175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:13.967 qpair failed and we were unable to recover it.
[... same sequence repeated for tqpair=0x7f72d0000b90 through 2024-07-15 22:05:08.066 ...]
00:27:13.968 [2024-07-15 22:05:08.066675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.968 [2024-07-15 22:05:08.066692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.968 qpair failed and we were unable to recover it.
[... same sequence repeated for tqpair=0x1e7ffc0 through 2024-07-15 22:05:08.086 ...]
00:27:13.970 [2024-07-15 22:05:08.086097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.970 [2024-07-15 22:05:08.086114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.970 qpair failed and we were unable to recover it.
[... same sequence repeated for tqpair=0x7f72c8000b90 through 2024-07-15 22:05:08.095, ending with ...]
00:27:13.971 [2024-07-15 22:05:08.095770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.971 [2024-07-15 22:05:08.095783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:13.971 qpair failed and we were unable to recover it.
00:27:13.971 [2024-07-15 22:05:08.095914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.095926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.096104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.096116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.096241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.096253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.096458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.096469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.096598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.096609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.096731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.096743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.096852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.096863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.096985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.096997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.097118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.097129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.097316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.097327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 
00:27:13.972 [2024-07-15 22:05:08.097461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.097473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.097596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.097609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.097732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.097745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.097864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.097876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.098066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.098078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.098287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.098299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.098504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.098517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.098705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.098717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.099010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.099022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.099201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.099213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 
00:27:13.972 [2024-07-15 22:05:08.099358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.099370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.099540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.099551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.099687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.099699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.099788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.099800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.100038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.100049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.100227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.100239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.100482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.100494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.100759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.100771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.100899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.100910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.101020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.101032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 
00:27:13.972 [2024-07-15 22:05:08.101219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.101240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.101434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.101446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.101641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.101652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.101726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.101737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.101890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.101902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.102157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.102168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.102297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.102308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.102503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.102515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.102668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.102680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.102807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.102821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 
00:27:13.972 [2024-07-15 22:05:08.103003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.972 [2024-07-15 22:05:08.103016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.972 qpair failed and we were unable to recover it. 00:27:13.972 [2024-07-15 22:05:08.103141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.103152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.103281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.103293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.103480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.103493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.103587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.103599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.103787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.103799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.103988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.103999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.104204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.104215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.104423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.104434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.104630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.104642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 
00:27:13.973 [2024-07-15 22:05:08.104830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.104842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.105052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.105064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.105306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.105317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.105596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.105607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.105789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.105801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.105933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.105944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.106145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.106156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.106343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.106355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.106620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.106632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.106760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.106773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 
00:27:13.973 [2024-07-15 22:05:08.106989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.107001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.107216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.107230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.107377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.107388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.107573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.107584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.107761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.107772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.107970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.107982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.108136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.108155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.108343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.108360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.108546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.108562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 00:27:13.973 [2024-07-15 22:05:08.108753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.973 [2024-07-15 22:05:08.108768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.973 qpair failed and we were unable to recover it. 
[... the failure record for tqpair=0x1e7ffc0 repeats 103 more times (timestamps 22:05:08.108343 through 22:05:08.129047), all errno = 111 against 10.0.0.2:4420 ...]
00:27:13.976 [2024-07-15 22:05:08.129234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.129250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.129397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.129413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.129613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.129628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.129823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.129838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.129959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.129974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.130125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.130140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.130406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.130420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.130666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.130682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.130954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.130970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.131102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.131118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 
00:27:13.976 [2024-07-15 22:05:08.131333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.131349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.131506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.131522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.131723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.131739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.132029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.132044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.132304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.132319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.132513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.132529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.132671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.132687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.132865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.132881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.133143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.133160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.133357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.133373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 
00:27:13.976 [2024-07-15 22:05:08.133567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.133583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.133777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.133792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.133976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.133991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.134243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.134258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.134467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.134482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.134668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.134683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.134882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.134897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.135148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.135163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.135284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.135300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 00:27:13.976 [2024-07-15 22:05:08.135492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.976 [2024-07-15 22:05:08.135508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.976 qpair failed and we were unable to recover it. 
00:27:13.976 [2024-07-15 22:05:08.135622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.135638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.135913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.135928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.136127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.136142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.136275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.136291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.136505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.136520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.136652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.136668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.136792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.136807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.136994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.137009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.137144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.137159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.137431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.137446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 
00:27:13.977 [2024-07-15 22:05:08.137629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.137645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.137860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.137876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.138015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.138030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.138215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.138240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.138388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.138404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.138597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.138615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.138815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.138830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.139008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.139023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.139282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.139298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.139425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.139441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 
00:27:13.977 [2024-07-15 22:05:08.139626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.139641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.139841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.139857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.140036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.140052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.140326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.140341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.140544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.140559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.140679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.140694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.140895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.140911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.141044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.141059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.141192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.141207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.141464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.141480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 
00:27:13.977 [2024-07-15 22:05:08.141612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.141627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.141836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.141853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.142104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.142119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.142304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.142320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.142456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.142472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.142750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.142765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.143022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.143038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.143168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.143185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.143321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.977 [2024-07-15 22:05:08.143336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.977 qpair failed and we were unable to recover it. 00:27:13.977 [2024-07-15 22:05:08.143543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.143559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 
00:27:13.978 [2024-07-15 22:05:08.143753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.143769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.143887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.143902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.144018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.144035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.144158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.144172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.144423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.144439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.144622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.144637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.144752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.144767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.145041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.145057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.145355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.145371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.145575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.145590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 
00:27:13.978 [2024-07-15 22:05:08.145867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.145882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.146092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.146108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.146254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.146270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.146547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.146562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.146716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.146732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.146864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.146880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.147103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.147123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.147336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.147351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.147556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.147572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.147772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.147788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 
00:27:13.978 [2024-07-15 22:05:08.147938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.147953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.148201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.148216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.148349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.148365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.148508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.148524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.148644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.148660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.148786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.148801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.149062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.149077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.149261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.149277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.149528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.149543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.149663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.149682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 
00:27:13.978 [2024-07-15 22:05:08.149930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.149945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.150078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.978 [2024-07-15 22:05:08.150093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.978 qpair failed and we were unable to recover it. 00:27:13.978 [2024-07-15 22:05:08.150366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.150382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.150562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.150578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.150729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.150745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.151030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.151046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.151243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.151258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.151460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.151475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.151761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.151777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.151914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.151930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 
00:27:13.979 [2024-07-15 22:05:08.152116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.152131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.152336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.152352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.152480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.152496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.152755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.152771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.153051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.153066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.153219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.153239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.153425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.153441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.153585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.153599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.153806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.153820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.154071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.154086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 
00:27:13.979 [2024-07-15 22:05:08.154210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.154233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.154452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.154468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.154587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.154603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.154872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.154887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.155078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.155093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.155342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.155357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.155566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.155584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.155736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.155751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.155935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.155950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 00:27:13.979 [2024-07-15 22:05:08.156155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.979 [2024-07-15 22:05:08.156170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:13.979 qpair failed and we were unable to recover it. 
00:27:13.979 [2024-07-15 22:05:08.156492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.979 [2024-07-15 22:05:08.156508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:13.979 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for ~118 consecutive attempts against tqpair=0x1e7ffc0, addr=10.0.0.2, port=4420, from 22:05:08.156492 through 22:05:08.180351 ...]
00:27:14.266 [2024-07-15 22:05:08.180512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.266 [2024-07-15 22:05:08.180527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.266 qpair failed and we were unable to recover it.
[... the triplet repeats for ~92 further attempts against tqpair=0x7f72c8000b90, addr=10.0.0.2, port=4420, through the final attempt below ...]
00:27:14.268 [2024-07-15 22:05:08.197220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.268 [2024-07-15 22:05:08.197235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.268 qpair failed and we were unable to recover it.
00:27:14.268 [2024-07-15 22:05:08.197353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.197366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.197495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.197507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.197704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.197716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.197927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.197938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.198091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.198104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.198301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.198313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.198556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.198568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.198772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.198784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.198907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.198918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.199021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.199032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 
00:27:14.268 [2024-07-15 22:05:08.199273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.199285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.199419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.199431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.199558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.199570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.199767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.199779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.199990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.200002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.268 [2024-07-15 22:05:08.200130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.268 [2024-07-15 22:05:08.200143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.268 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.200351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.200363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.200481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.200494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.200648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.200661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.200846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.200857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 
00:27:14.269 [2024-07-15 22:05:08.201065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.201077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.201324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.201335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.201510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.201522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.201644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.201656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.201772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.201783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.201891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.201903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.202081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.202093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.202289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.202302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.202433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.202447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.202615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.202629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 
00:27:14.269 [2024-07-15 22:05:08.202748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.202759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.202937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.202949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.203084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.203096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.203289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.203301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.203551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.203563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.203681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.203693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.203810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.203822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.203896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.203908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.204040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.204052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.204210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.204222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 
00:27:14.269 [2024-07-15 22:05:08.204398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.204410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.204490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.204503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.204599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.204611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.204739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.204751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.204929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.204943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.205050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.205063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.205236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.205248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.205370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.205382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.205553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.205565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.205697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.205709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 
00:27:14.269 [2024-07-15 22:05:08.205839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.205851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.206105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.206119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.206358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.206370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.206477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.206489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.206623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.206635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.206878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.269 [2024-07-15 22:05:08.206890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.269 qpair failed and we were unable to recover it. 00:27:14.269 [2024-07-15 22:05:08.206998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.207010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.207127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.207139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.207372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.207383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.207538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.207550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 
00:27:14.270 [2024-07-15 22:05:08.207751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.207763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.207872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.207884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.208126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.208138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.208274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.208286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.208401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.208413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.208591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.208603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.208777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.208790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.208923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.208935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.209059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.209074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.209276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.209288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 
00:27:14.270 [2024-07-15 22:05:08.209420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.209432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.209578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.209591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.209766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.209779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.209911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.209924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.210117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.210130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.210322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.210335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.210528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.210541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.210700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.210712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.210897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.210910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.211034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.211046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 
00:27:14.270 [2024-07-15 22:05:08.211293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.211305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.211562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.211574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.211705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.211717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.211841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.211852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.212046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.212058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.212210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.212221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.212467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.212479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.212683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.212696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.212874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.212886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.213084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.213096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 
00:27:14.270 [2024-07-15 22:05:08.213265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.213277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.213519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.213531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.213671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.213682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.213879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.213893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.214078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.214090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.214336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.270 [2024-07-15 22:05:08.214349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.270 qpair failed and we were unable to recover it. 00:27:14.270 [2024-07-15 22:05:08.214463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.214475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.214674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.214686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.214793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.214804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.214995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.215007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 
00:27:14.271 [2024-07-15 22:05:08.215147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.215159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.215287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.215300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.215441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.215453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.215583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.215595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.215779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.215790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.216030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.216042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.216167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.216179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.216367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.216379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.216501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.216518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.216635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.216647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 
00:27:14.271 [2024-07-15 22:05:08.216757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.216769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.216966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.216980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.217103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.217115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.217306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.217316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.217506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.217517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.217720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.217730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.217915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.217925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.218112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.218122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.218256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.218267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.218453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.218463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 
00:27:14.271 [2024-07-15 22:05:08.218592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.218602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.218722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.218732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.218869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.218880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.219073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.219083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.219256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.219266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.219383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.219393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.219519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.219528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.219714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.219725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.219920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.219931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 00:27:14.271 [2024-07-15 22:05:08.220042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.271 [2024-07-15 22:05:08.220052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.271 qpair failed and we were unable to recover it. 
00:27:14.271 [2024-07-15 22:05:08.220195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.271 [2024-07-15 22:05:08.220204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.271 qpair failed and we were unable to recover it.
00:27:14.271 [... the identical three-record sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 22:05:08.220324 through 22:05:08.247000; elided ...]
00:27:14.275 [... further identical sequences for tqpair=0x7f72c8000b90 continue through 22:05:08.247959; elided ...]
00:27:14.276 [2024-07-15 22:05:08.247980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8e050 (9): Bad file descriptor
00:27:14.276 [2024-07-15 22:05:08.248150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.276 [2024-07-15 22:05:08.248173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.276 qpair failed and we were unable to recover it.
00:27:14.276 [2024-07-15 22:05:08.248318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.276 [2024-07-15 22:05:08.248337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420
00:27:14.276 qpair failed and we were unable to recover it.
00:27:14.276 [... the same sequence repeats for tqpair=0x7f72d0000b90 through 22:05:08.249184, then for tqpair=0x7f72c8000b90 from 22:05:08.249420 through 22:05:08.256288; elided ...]
00:27:14.277 [2024-07-15 22:05:08.256539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.256549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.256725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.256734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.256862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.256872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.257076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.257086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.257303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.257314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.257457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.257467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.257638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.257649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.257852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.257862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.257986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.257996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.258187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.258197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 
00:27:14.277 [2024-07-15 22:05:08.258289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.258299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.258444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.258454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.258569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.258580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.258723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.258733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.258927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.258937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.259127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.259136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.259257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.259266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.259504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.259513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.259629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.259640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.259816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.259826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 
00:27:14.277 [2024-07-15 22:05:08.260084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.260093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.260291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.260301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.260392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.260403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.260581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.260591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.260808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.260818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.261021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.261031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.261215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.261229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.277 [2024-07-15 22:05:08.261370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.277 [2024-07-15 22:05:08.261380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.277 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.261503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.261513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.261625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.261634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-07-15 22:05:08.261910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.261920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.262051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.262061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.262252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.262262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.262380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.262390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.262596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.262606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.262749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.262758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.263008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.263018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.263152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.263161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.263337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.263347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.263546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.263556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-07-15 22:05:08.263677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.263686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.263942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.263952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.264095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.264105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.264287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.264297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.264480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.264489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.264679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.264689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.264882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.264892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.265010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.265020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.265148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.265158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.265335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.265345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-07-15 22:05:08.265468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.265479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.265753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.265763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.265937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.265947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.266013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.266022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.266188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.266198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.266414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.266424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.266533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.266543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.266732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.266742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.266875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.266884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.267005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.267014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 
00:27:14.278 [2024-07-15 22:05:08.267190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.267199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.267340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.267350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.267471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.267482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.267661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.267671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.267934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.267944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.268117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.268127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.268200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.268209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.268283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.268293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.278 qpair failed and we were unable to recover it. 00:27:14.278 [2024-07-15 22:05:08.268402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.278 [2024-07-15 22:05:08.268412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.268589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.268599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 
00:27:14.279 [2024-07-15 22:05:08.268787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.268796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.268917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.268927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.269104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.269114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.269222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.269235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.269356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.269366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.269502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.269512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.269634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.269643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.269832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.269842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.270019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.270028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.270159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.270169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 
00:27:14.279 [2024-07-15 22:05:08.270293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.270302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.270408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.270417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.270682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.270691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.270878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.270888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.271066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.271075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.271329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.271339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.271533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.271543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.271811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.271821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.272016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.272025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.272142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.272152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 
00:27:14.279 [2024-07-15 22:05:08.272282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.272292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.272426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.272435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.272627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.272637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.272762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.272772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.272917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.272927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.273053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.273063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.273307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.273316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.273425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.273435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.273615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.273625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.273828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.273838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 
00:27:14.279 [2024-07-15 22:05:08.273965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.273974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.274258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.274267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.274458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.274470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.274590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.274599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.274811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.274820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.275010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.275020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.275131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.275140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.275396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.275406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.279 [2024-07-15 22:05:08.275648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.279 [2024-07-15 22:05:08.275657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.279 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.275841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.275850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 
00:27:14.280 [2024-07-15 22:05:08.275984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.275994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.276186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.276195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.276318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.276327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.276570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.276579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.276769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.276778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.276901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.276911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.277176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.277185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.277364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.277374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.277496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.277505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.277663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.277672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 
00:27:14.280 [2024-07-15 22:05:08.277859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.277869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.277983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.277993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.278125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.278135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.278363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.278373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.278499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.278509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.278717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.278727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.278921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.278931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.279190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.279200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.279320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.279330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.279542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.279552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 
00:27:14.280 [2024-07-15 22:05:08.279682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.279692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.279820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.279829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.279943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.279953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.280138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.280148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.280265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.280275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.280389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.280399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.280578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.280588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.280744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.280754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.281003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.281013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.281125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.281134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 
00:27:14.280 [2024-07-15 22:05:08.281248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.281258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.281387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.281396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.281574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.281585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.281674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.281684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.281873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.281883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.281998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.282007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.282247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.282257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.282435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.282445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.282582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.282592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.280 qpair failed and we were unable to recover it. 00:27:14.280 [2024-07-15 22:05:08.282794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.280 [2024-07-15 22:05:08.282804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 
00:27:14.281 [2024-07-15 22:05:08.282992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.283002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.283174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.283183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.283369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.283380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.283495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.283505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.283613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.283623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.283744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.283753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.283995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.284005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.284179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.284188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.284362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.284372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.284568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.284578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 
00:27:14.281 [2024-07-15 22:05:08.284716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.284725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.284899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.284909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.285026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.285036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.285156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.285165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.285285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.285295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.285541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.285551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.285678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.285687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.285898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.285907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.286033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.286043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.286233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.286249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 
00:27:14.281 [2024-07-15 22:05:08.286380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.286394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.286523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.286537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.286689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.286702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.286978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.286991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.287185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.287199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.287355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.287370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.287552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.287565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.287765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.287778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.287976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.287989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.288240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.288254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 
00:27:14.281 [2024-07-15 22:05:08.288516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.288530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.288741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.288754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.288866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.288887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.289028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.289044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.289313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.281 [2024-07-15 22:05:08.289327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.281 qpair failed and we were unable to recover it. 00:27:14.281 [2024-07-15 22:05:08.289467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.289480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.289675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.289689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.289864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.289879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.290029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.290042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.290178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.290192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 
00:27:14.282 [2024-07-15 22:05:08.290385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.290399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.290650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.290664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.290875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.290889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.291081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.291096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.291287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.291302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.291435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.291448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.291645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.291660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.291851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.291865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.292083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.292097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.292253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.292267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 
00:27:14.282 [2024-07-15 22:05:08.292447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.292461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.292669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.292682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.292802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.292816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.293015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.293029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.293174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.293187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.293377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.293391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.293525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.293539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.293723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.293736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.293927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.293941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.294101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.294122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 
00:27:14.282 [2024-07-15 22:05:08.294250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.294266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.294458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.294472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.294680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.294694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.294886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.294900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.295148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.295161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.295287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.295302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.295424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.295438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.295626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.295640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.295771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.295785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.296005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.296018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 
00:27:14.282 [2024-07-15 22:05:08.296212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.296231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.296349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.296363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.296558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.296572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.296767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.296780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.297035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.297048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.282 [2024-07-15 22:05:08.297212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.282 [2024-07-15 22:05:08.297231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.282 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.297507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.297521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.297646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.297659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.297776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.297790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.297920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.297933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-07-15 22:05:08.298078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.298092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.298275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.298290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.298420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.298433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.298547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.298561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.298760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.298774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.298968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.298982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.299120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.299136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.299265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.299280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.299464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.299477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.299751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.299764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-07-15 22:05:08.299959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.299973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.300247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.300262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.300390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.300404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.300602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.300616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.300795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.300808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.300994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.301008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.301148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.301162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.301298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.301312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.301539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.301552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.301679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.301693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-07-15 22:05:08.301836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.301849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.302044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.302058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.302159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.302172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.302340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.302354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.302539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.302552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.302685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.302699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.302865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.302879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.303000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.303014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.303151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.303164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.303349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.303363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 
00:27:14.283 [2024-07-15 22:05:08.303538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.303552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.303655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.303668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.303848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.303861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.304056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.304072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.304207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.304220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.304424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.304440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.283 [2024-07-15 22:05:08.304639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.283 [2024-07-15 22:05:08.304653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.283 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.304769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.304783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.304930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.304944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.305059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.305073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-07-15 22:05:08.305274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.305288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.305462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.305476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.305725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.305739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.305937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.305950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.306079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.306093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.306283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.306297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.306552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.306566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.306702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.306715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.306842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.306856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.307004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.307017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-07-15 22:05:08.307203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.307216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.307415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.307429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.307574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.307588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.307737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.307751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.307872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.307885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.308007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.308021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.308196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.308209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.308464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.308477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.308644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.308658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.308935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.308949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-07-15 22:05:08.309083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.309098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.309309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.309324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.309455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.309468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.309754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.309767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.310020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.310034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.310236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.310250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.310368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.310382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.310511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.310525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.310654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.310667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.310878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.310892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 
00:27:14.284 [2024-07-15 22:05:08.311080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.311094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.311243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.311257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.311437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.311451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.311652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.311666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.311926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.311940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.312022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.312035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.312253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.312267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.312458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.284 [2024-07-15 22:05:08.312473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.284 qpair failed and we were unable to recover it. 00:27:14.284 [2024-07-15 22:05:08.312608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.312622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.312757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.312770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 
00:27:14.285 [2024-07-15 22:05:08.313020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.313034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.313220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.313238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.313372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.313386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.313508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.313521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.313706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.313720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.313850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.313864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.314047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.314060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.314239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.314250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.314442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.314453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 00:27:14.285 [2024-07-15 22:05:08.314715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.285 [2024-07-15 22:05:08.314726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.285 qpair failed and we were unable to recover it. 
00:27:14.285 [2024-07-15 22:05:08.314836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.314846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.315031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.315043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.315232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.315243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.315353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.315362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.315662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.315673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.315811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.315820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.315956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.315965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.316219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.316232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.316339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.316349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.316478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.316487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.316735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.316746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.316889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.316899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.317099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.317108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.317307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.317317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.317447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.317457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.317636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.317646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.317799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.317808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.317983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.317993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.318129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.318139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.318351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.318362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.318552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.318561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.318673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.318685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.318800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.285 [2024-07-15 22:05:08.318811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.285 qpair failed and we were unable to recover it.
00:27:14.285 [2024-07-15 22:05:08.319054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.319064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.319198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.319210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.319402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.319413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.319624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.319634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.319746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.319756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.319946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.319956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.320118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.320128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.320240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.320250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.320383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.320392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.320576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.320586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.320810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.320819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.321004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.321014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.321192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.321202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.321322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.321332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.321445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.321456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.321652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.321662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.321780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.321790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.321866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.321876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.322114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.322123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.322303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.322313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.322507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.322517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.322764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.322774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.322908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.322918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.323051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.323060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.323238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.323247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.323372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.323383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.323573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.323583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.323700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.323709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.323834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.323844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.323965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.323975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.324106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.324116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.324355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.324366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.324508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.324518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.324780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.324790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.325000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.325010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.325135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.325145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.325258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.325270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.325464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.325475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.325594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.325604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.325729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.325739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.286 [2024-07-15 22:05:08.325913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.286 [2024-07-15 22:05:08.325923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.286 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.326063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.326076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.326328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.326338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.326454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.326464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.326612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.326623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.326746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.326756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.326896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.326905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.327096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.327106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.327211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.327220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.327344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.327355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.327469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.327479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.327650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.327660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.327926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.327935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.328092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.328102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.328239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.328249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.328409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.328421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.328607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.328617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.328709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.328718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.328842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.328851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.328964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.328974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.329160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.329170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.329344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.329354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.329535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.329545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.329661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.329671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.329799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.329809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.329926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.329936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.330057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.330066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.330332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.330342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.330461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.330470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.330579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.330589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.330703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.330713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.330835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.330845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.330976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.330986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.331106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.331115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.331305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.331315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.331443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.331454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.331562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.331572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.331746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.331757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.331883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.331893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.332002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.332011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.332149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.287 [2024-07-15 22:05:08.332159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.287 qpair failed and we were unable to recover it.
00:27:14.287 [2024-07-15 22:05:08.332271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.332283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.332472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.332482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.332592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.332602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.332775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.332784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.332907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.332917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.333102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.333112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.333222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.333242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.333377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.333386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.333559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.333569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.333763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.333773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.333906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.333916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.334035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.334045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.334141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.334150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.334266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.334276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.334460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.334471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.334657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.334667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.334848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.334857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.334974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.334983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.335063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.335073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.335205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.335215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.335292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.335303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.335427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.335436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.335547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.335557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.335667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.335678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.335756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.335765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.335881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.335892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.336084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.336094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.336215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.336229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.336472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.336482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.336596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.336605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.336736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.336745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.336941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.336950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.337141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.337151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.337326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.337336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.337515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.337524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.337668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.337678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.337885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.337895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.338084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.338094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.338223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.338236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.338428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.338438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.288 [2024-07-15 22:05:08.338531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.288 [2024-07-15 22:05:08.338544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.288 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.338662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.338672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.338778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.338788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.338962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.338971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.339145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.339155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.339276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.339286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.339467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.339478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.339613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.339623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.339745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.339757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.339992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.340002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.340112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.340121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.340256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.340266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.340438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.340447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.340565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.340575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.340841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.340850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.340957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.340966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.341087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.341097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.341211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.341221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.341396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.341406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.341522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.341533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.341727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.341737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.341997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.342008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.342118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.342128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.342278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.342288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.342528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.342537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.342729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.289 [2024-07-15 22:05:08.342739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.289 qpair failed and we were unable to recover it.
00:27:14.289 [2024-07-15 22:05:08.342851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.342860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.342978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.342988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.343162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.343172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.343364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.343375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.343483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.343492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.343686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.343697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.343937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.343947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.344097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.344106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.344302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.344312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.344517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.344527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 
00:27:14.289 [2024-07-15 22:05:08.344711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.344720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.344843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.344853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.344940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.344949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.289 [2024-07-15 22:05:08.345059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.289 [2024-07-15 22:05:08.345069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.289 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.345255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.345269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.345390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.345400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.345666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.345677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.345866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.345877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.346055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.346065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.346204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.346213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 
00:27:14.290 [2024-07-15 22:05:08.346407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.346418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.346588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.346597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.346790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.346800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.346921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.346932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.347040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.347050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.347238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.347248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.347455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.347464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.347644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.347655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.347857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.347867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.347980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.347990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 
00:27:14.290 [2024-07-15 22:05:08.348097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.348106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.348293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.348303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.348429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.348439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.348586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.348596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.348715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.348725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.348922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.348933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.349109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.349119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.349290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.349300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.349486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.349495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.349671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.349681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 
00:27:14.290 [2024-07-15 22:05:08.349795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.349804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.349997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.350008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.350145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.350156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.350279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.350289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.350411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.350421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.350611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.350621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.350741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.350751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.350993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.351005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.351115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.351125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.290 [2024-07-15 22:05:08.351298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.351308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 
00:27:14.290 [2024-07-15 22:05:08.351499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.290 [2024-07-15 22:05:08.351510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.290 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.351691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.351701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.351818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.351828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.352006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.352017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.352134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.352147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.352282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.352292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.352409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.352420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.352594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.352603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.352791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.352802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.352932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.352942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 
00:27:14.291 [2024-07-15 22:05:08.353065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.353075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.353257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.353268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.353444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.353455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.353638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.353649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.353780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.353790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.353883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.353893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.354076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.354086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.354195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.354205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.354324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.354334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.354533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.354543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 
00:27:14.291 [2024-07-15 22:05:08.354738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.354748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.354857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.354867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.355052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.355061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.355179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.355189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.355377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.355387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.355511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.355521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.355717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.355727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.355905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.355915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.356038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.356048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.356243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.356254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 
00:27:14.291 [2024-07-15 22:05:08.356439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.356448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.356754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.356764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.356856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.356866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.357004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.357014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.357109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.357119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.357253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.357263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.357442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.357452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.357573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.357582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.357713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.357723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.357912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.357922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 
00:27:14.291 [2024-07-15 22:05:08.358099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.358108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.291 qpair failed and we were unable to recover it. 00:27:14.291 [2024-07-15 22:05:08.358291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-07-15 22:05:08.358300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.358416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.358426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.358653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.358664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.358786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.358798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.359020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.359030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.359149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.359159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.359399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.359409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.359596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.359606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.359845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.359855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 
00:27:14.292 [2024-07-15 22:05:08.360036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.360045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.360241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.360251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.360478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.360489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.360595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.360605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.360715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.360725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.360898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.360908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.361011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.361020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.361206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.361216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.361350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.361360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.361499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.361509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 
00:27:14.292 [2024-07-15 22:05:08.361634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.361644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.361860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.361870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.362070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.362080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.362268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.362278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.362405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.362415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.362528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.362538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.362659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.362669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.362880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.362889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.362991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.363001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.363092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.363101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 
00:27:14.292 [2024-07-15 22:05:08.363234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.363245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.363421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.363431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.363600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.363610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.363733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.363743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.363855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.363864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.364008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.364017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.364209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.364219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.364465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.364475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.364681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.364691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.364833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.364842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 
00:27:14.292 [2024-07-15 22:05:08.364960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.364970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.292 [2024-07-15 22:05:08.365104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-07-15 22:05:08.365114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.292 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.365239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.365248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.365364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.365374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.365542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.365553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.365672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.365682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.365854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.365865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.366039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.366050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.366220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.366233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.366413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.366423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 
00:27:14.293 [2024-07-15 22:05:08.366549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.366559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.366669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.366679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.366815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.366825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.366940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.366950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.367126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.367135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.367310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.367320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.367503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.367513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.367695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.367705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.367903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.367912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 00:27:14.293 [2024-07-15 22:05:08.368038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.293 [2024-07-15 22:05:08.368047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.293 qpair failed and we were unable to recover it. 
00:27:14.293 [2024-07-15 22:05:08.368254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.293 [2024-07-15 22:05:08.368264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.293 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair failure above repeats 49 more times between 22:05:08.368516 and 22:05:08.376015, identical except for the timestamps; every attempt targets tqpair=0x7f72c8000b90 at 10.0.0.2, port 4420 ...]
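On Linux, errno 111 is ECONNREFUSED: the TCP connection attempt to 10.0.0.2:4420 was actively refused because nothing was accepting on that port at the time, so posix_sock_create() fails before nvme_tcp_qpair_connect_sock() can begin NVMe/TCP setup on the qpair. The standalone C sketch below (illustrative only, not SPDK source; the address and port are taken from the log above) reproduces the same errno against a reachable host with no listener:

/* Minimal standalone reproduction of "connect() failed, errno = 111".
 * Illustrative sketch only, not SPDK source; 10.0.0.2:4420 is the
 * target address/port taken from the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),              /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* If the host is reachable but nothing is listening on the port,
     * the kernel answers the SYN with RST and connect() fails with
     * errno 111 (ECONNREFUSED) -- the exact error in the log. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Built and run while no NVMe/TCP target listens on that port, this prints "connect() failed, errno = 111 (Connection refused)", matching the burst above.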
00:27:14.294 [2024-07-15 22:05:08.376134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.294 [2024-07-15 22:05:08.376143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.294 qpair failed and we were unable to recover it.
00:27:14.294 [2024-07-15 22:05:08.376302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.294 [2024-07-15 22:05:08.376325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.294 qpair failed and we were unable to recover it.
[... note the failing qpair changes from tqpair=0x7f72c8000b90 to tqpair=0x1e7ffc0 at 22:05:08.376302 while the target address stays 10.0.0.2, port 4420; the same three-line failure then repeats 8 more times through 22:05:08.377733 ...]
[... the identical connect() failure against tqpair=0x1e7ffc0 at 10.0.0.2, port 4420 continues uninterrupted: roughly 150 more repetitions between 22:05:08.377866 and 22:05:08.402962, each ending in "qpair failed and we were unable to recover it."; only the timestamps differ ...]
00:27:14.298 [2024-07-15 22:05:08.403140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.298 [2024-07-15 22:05:08.403153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.298 qpair failed and we were unable to recover it. 00:27:14.298 [2024-07-15 22:05:08.403267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.298 [2024-07-15 22:05:08.403281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.298 qpair failed and we were unable to recover it. 00:27:14.298 [2024-07-15 22:05:08.403479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.298 [2024-07-15 22:05:08.403492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.298 qpair failed and we were unable to recover it. 00:27:14.298 [2024-07-15 22:05:08.403714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.298 [2024-07-15 22:05:08.403727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.298 qpair failed and we were unable to recover it. 00:27:14.298 [2024-07-15 22:05:08.403871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.298 [2024-07-15 22:05:08.403884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.298 qpair failed and we were unable to recover it. 00:27:14.298 [2024-07-15 22:05:08.404007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.298 [2024-07-15 22:05:08.404020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.298 qpair failed and we were unable to recover it. 00:27:14.298 [2024-07-15 22:05:08.404204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.298 [2024-07-15 22:05:08.404217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.298 qpair failed and we were unable to recover it. 00:27:14.298 [2024-07-15 22:05:08.404365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.404379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.404574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.404588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.404707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.404720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-07-15 22:05:08.404868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.404881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.405016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.405029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.405221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.405238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.405424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.405438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.405578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.405593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.405710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.405723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.405914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.405927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.406057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.406073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.406195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.406208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.406464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.406478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-07-15 22:05:08.406700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.406713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.406813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.406827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.406942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.406955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.407145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.407159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.407294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.407308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.407425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.407438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.407641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.407654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.407838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.407851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.408056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.408069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.408264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.408278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-07-15 22:05:08.408487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.408501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.408634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.408648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.408783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.408797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.408916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.408929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.409132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.409145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.409248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.409263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.409387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.409400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.409538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.409552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.409737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.409750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.410012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.410026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 
00:27:14.299 [2024-07-15 22:05:08.410159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.410172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.410365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.410379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.410578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.410591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.410687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.410700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.410902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.410916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.411051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.411064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.411184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.411197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.411385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.299 [2024-07-15 22:05:08.411399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.299 qpair failed and we were unable to recover it. 00:27:14.299 [2024-07-15 22:05:08.411535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.411549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.411669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.411682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 
00:27:14.300 [2024-07-15 22:05:08.411812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.411826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.411950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.411964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.412149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.412163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.412349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.412363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.412557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.412570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.412780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.412796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.412933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.412947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.413102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.413115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.413257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.413279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 00:27:14.300 [2024-07-15 22:05:08.413410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.300 [2024-07-15 22:05:08.413425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.300 qpair failed and we were unable to recover it. 
[... the same three-message failure repeats verbatim for tqpair=0x7f72d0000b90 (connect() failed, errno = 111; addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) from 22:05:08.413410 through 22:05:08.430209 ...]
00:27:14.302 [2024-07-15 22:05:08.430422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.430449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it.
00:27:14.302 [2024-07-15 22:05:08.430579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.430592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it.
00:27:14.302 [2024-07-15 22:05:08.430717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.430727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it. 00:27:14.302 [2024-07-15 22:05:08.430847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.430857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it. 00:27:14.302 [2024-07-15 22:05:08.430969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.430978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it. 00:27:14.302 [2024-07-15 22:05:08.431147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.431157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it. 00:27:14.302 [2024-07-15 22:05:08.431271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.431282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it. 00:27:14.302 [2024-07-15 22:05:08.431405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.431416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it. 00:27:14.302 [2024-07-15 22:05:08.431596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.431607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it. 00:27:14.302 [2024-07-15 22:05:08.431798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.431808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it. 00:27:14.302 [2024-07-15 22:05:08.431938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.302 [2024-07-15 22:05:08.431947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.302 qpair failed and we were unable to recover it. 00:27:14.302 [2024-07-15 22:05:08.432134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.432144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 
00:27:14.303 [2024-07-15 22:05:08.432255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.432266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.432423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.432434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.432550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.432560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.432686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.432696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.432805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.432815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.432934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.432944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.433070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.433080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.433203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.433214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.433346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.433359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.433485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.433496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 
00:27:14.303 [2024-07-15 22:05:08.433702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.433712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.433807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.433817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.433950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.433960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.434132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.434142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.434251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.434262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.434377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.434387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.434504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.434515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.434696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.434707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.434819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.434829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.435013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.435024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 
00:27:14.303 [2024-07-15 22:05:08.435197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.435208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.435325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.435336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.435514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.435524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.435704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.435714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.435854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.435864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.435969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.435979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.436182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.436193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.436322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.436332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.436463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.436474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.436658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.436669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 
00:27:14.303 [2024-07-15 22:05:08.436782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.436793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.436908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.436919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.437105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.437115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.437268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.437278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.437406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.437416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.437544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.437554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.437730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.437741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.437873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.437883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.438008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.438018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.438147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.438156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 
00:27:14.303 [2024-07-15 22:05:08.438283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.438293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.438413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.438423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.303 [2024-07-15 22:05:08.438542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.303 [2024-07-15 22:05:08.438552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.303 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.438724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.438735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.438845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.438856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.439098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.439108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.439308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.439319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.439439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.439449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.439623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.439635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.439831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.439840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 
00:27:14.304 [2024-07-15 22:05:08.440020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.440030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.440139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.440149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.440323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.440334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.440451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.440461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.440636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.440645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.440753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.440764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.440869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.440880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.441004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.441015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.441091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.441100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.441205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.441215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 
00:27:14.304 [2024-07-15 22:05:08.441286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.441296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.441446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.441457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.441531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.441541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.441782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.441794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.441975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.441987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.442112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.442122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.442311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.442321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.442445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.442468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.442589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.442598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.442794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.442804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 
00:27:14.304 [2024-07-15 22:05:08.442916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.442927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.443097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.443107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.443316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.443326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.443507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.443518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.443632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.443642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.443767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.443778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.443985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.443996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.444182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.444193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.444319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.444330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.444523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.444533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 
00:27:14.304 [2024-07-15 22:05:08.444653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.444663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.444866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.444877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.444995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.445005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.445097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.445108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.445233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.445243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.445361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.445372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.445551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.445562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.445680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.445690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.304 [2024-07-15 22:05:08.445805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.304 [2024-07-15 22:05:08.445817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.304 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.445922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.445933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-07-15 22:05:08.446134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.446145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.446340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.446350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.446462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.446472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.446608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.446618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.446727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.446737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.446928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.446938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.447073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.447083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.447190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.447200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.447311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.447322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.447442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.447452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-07-15 22:05:08.447556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.447566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.447804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.447815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.447936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.447946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.448064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.448074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.448253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.448263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.448372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.448382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.448491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.448501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.448631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.448642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.448817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.448827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.449096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.449105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-07-15 22:05:08.449230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.449240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.449370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.449381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.449482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.449492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.449611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.449620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.449736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.449747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.449867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.449877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.450067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.450078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.450189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.450199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.450318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.450328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.450456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.450466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-07-15 22:05:08.450595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.450604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.450741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.450751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.450863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.450874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.450997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.451007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.451203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.451213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.451333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.451343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.451459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.451469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.451597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.451606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.451777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.451789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 00:27:14.305 [2024-07-15 22:05:08.451901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.305 [2024-07-15 22:05:08.451911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.305 qpair failed and we were unable to recover it. 
00:27:14.305 [2024-07-15 22:05:08.452084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.305 [2024-07-15 22:05:08.452094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.305 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error; "qpair failed and we were unable to recover it.") repeats for roughly 200 further consecutive attempts between 22:05:08.452 and 22:05:08.484, differing only in the sub-millisecond timestamps; every attempt targets the same tqpair=0x7f72c8000b90, addr=10.0.0.2, port=4420 ...]
00:27:14.590 [2024-07-15 22:05:08.484059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.590 [2024-07-15 22:05:08.484070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.590 qpair failed and we were unable to recover it.
00:27:14.590 [2024-07-15 22:05:08.484181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.590 [2024-07-15 22:05:08.484191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.590 qpair failed and we were unable to recover it. 00:27:14.590 [2024-07-15 22:05:08.484313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.590 [2024-07-15 22:05:08.484323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.590 qpair failed and we were unable to recover it. 00:27:14.590 [2024-07-15 22:05:08.484448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.590 [2024-07-15 22:05:08.484458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.590 qpair failed and we were unable to recover it. 00:27:14.590 [2024-07-15 22:05:08.484640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.590 [2024-07-15 22:05:08.484650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.590 qpair failed and we were unable to recover it. 00:27:14.590 [2024-07-15 22:05:08.484764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.590 [2024-07-15 22:05:08.484775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.590 qpair failed and we were unable to recover it. 00:27:14.590 [2024-07-15 22:05:08.484884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.590 [2024-07-15 22:05:08.484894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.485001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.485011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.485196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.485207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.485403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.485414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.485536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.485546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 
00:27:14.591 [2024-07-15 22:05:08.485658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.485668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.485849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.485859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.485978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.485988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.486122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.486132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.486311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.486321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.486438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.486449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.486635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.486646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.486762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.486772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.486896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.486907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.487108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.487119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 
00:27:14.591 [2024-07-15 22:05:08.487232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.487243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.487407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.487417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.487594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.487604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.487722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.487732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.487839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.487849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.487974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.487984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.488244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.488254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.488371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.488382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.488632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.488644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.488830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.488841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 
00:27:14.591 [2024-07-15 22:05:08.489020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.489030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.489255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.489266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.489389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.489401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.489524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.489534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.489712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.489723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.489819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.489829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.489958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.489969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.490142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.490153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.490270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.490280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.490389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.490399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 
00:27:14.591 [2024-07-15 22:05:08.490506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.490515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.490637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.490647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.490823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.490833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.490954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.490965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.491096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.591 [2024-07-15 22:05:08.491106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.591 qpair failed and we were unable to recover it. 00:27:14.591 [2024-07-15 22:05:08.491209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.491219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.491466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.491477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.491597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.491607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.491726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.491736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.491862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.491871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 
00:27:14.592 [2024-07-15 22:05:08.491989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.491999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.492110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.492120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.492237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.492247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.492358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.492368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.492472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.492482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.492609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.492632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.492755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.492769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.492896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.492909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.493097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.493110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.493235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.493249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 
00:27:14.592 [2024-07-15 22:05:08.493454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.493468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.493579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.493593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.493734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.493748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.493866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.493880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.494015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.494029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.494229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.494244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.494336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.494351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.494486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.494500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.494728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.494743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.494872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.494886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 
00:27:14.592 [2024-07-15 22:05:08.494998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.495012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.495266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.495280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.495404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.495418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.495603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.495616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.495836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.495850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.495972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.495986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.496121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.496135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.496325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.496339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.496518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.496532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.496711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.496725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 
00:27:14.592 [2024-07-15 22:05:08.496864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.496878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.496995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.497009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.497119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.497137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.497259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.497272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.497473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.497487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.592 [2024-07-15 22:05:08.497683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.592 [2024-07-15 22:05:08.497697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.592 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.497874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.497888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.497973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.497986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.498236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.498249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.498445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.498455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 
00:27:14.593 [2024-07-15 22:05:08.498575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.498585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.498700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.498711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.498830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.498840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.498972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.498982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.499176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.499185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.499312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.499322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.499524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.499534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.499723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.499733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.499918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.499929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.500051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.500061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 
00:27:14.593 [2024-07-15 22:05:08.500180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.500191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.500319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.500329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.500437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.500447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.500623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.500633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.500820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.500830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.501040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.501050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.501232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.501243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.501366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.501377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.501486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.501496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.501619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.501635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 
00:27:14.593 [2024-07-15 22:05:08.501818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.501832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.502024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.502039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.502239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.502254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.502454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.502467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.502595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.502609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.502744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.502758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.502880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.502893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.503089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.503103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.503328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.503342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.503526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.503540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 
00:27:14.593 [2024-07-15 22:05:08.503729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.503742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.503859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.503871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.504121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.504132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.504227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.504238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.504363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.504373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.504491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.504502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.593 [2024-07-15 22:05:08.504636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.593 [2024-07-15 22:05:08.504646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.593 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.504817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.504826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.505034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.505044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.505174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.505184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 
00:27:14.594 [2024-07-15 22:05:08.505323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.505333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.505524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.505535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.505666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.505676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.505783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.505793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.505910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.505920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.506036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.506046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.506231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.506245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.506365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.506375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.506519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.506529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.506633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.506642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 
00:27:14.594 [2024-07-15 22:05:08.506750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.506761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.506887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.506897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.507073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.507083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.507264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.507275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.507483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.507494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.507631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.507641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.507756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.507766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.507956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.507966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.508153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.508163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.508287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.508299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 
00:27:14.594 [2024-07-15 22:05:08.508515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.508524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.508649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.508659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.508775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.508785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.508906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.508917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.509039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.509048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.594 qpair failed and we were unable to recover it. 00:27:14.594 [2024-07-15 22:05:08.509257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.594 [2024-07-15 22:05:08.509268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.509484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.509494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.509610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.509620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.509739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.509749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.509862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.509872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 
00:27:14.595 [2024-07-15 22:05:08.510003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.510013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.510133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.510143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.510269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.510279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.510392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.510403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.510519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.510529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.510650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.510660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.510765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.510775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.510954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.510964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.511091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.511102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.511206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.511216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 
00:27:14.595 [2024-07-15 22:05:08.511395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.511405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.511521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.511532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.511644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.511654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.511749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.511759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.511875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.511886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.512083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.512094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.512205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.512215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.512413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.512423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.512549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.512558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.512729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.512740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 
00:27:14.595 [2024-07-15 22:05:08.512919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.512929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.513128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.513138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.513252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.513262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.513389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.513399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.513572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.513582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.513704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.513714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.513890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.513901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.514075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.514085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.514270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.514280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.514468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.514480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 
00:27:14.595 [2024-07-15 22:05:08.514609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.514619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.514803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.514814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.514934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.514944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.515118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.515129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.515378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.515389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.595 [2024-07-15 22:05:08.515504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.595 [2024-07-15 22:05:08.515514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.595 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.515716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.515727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.515845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.515855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.515975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.515985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.516108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.516118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 
00:27:14.596 [2024-07-15 22:05:08.516222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.516236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.516345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.516355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.516528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.516539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.516661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.516670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.516847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.516857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.517050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.517060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.517171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.517182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.517311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.517322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.517535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.517545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.517661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.517671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 
00:27:14.596 [2024-07-15 22:05:08.517775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.517786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.517960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.517970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.518163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.518175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.518295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.518305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.518487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.518497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.518617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.518629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.518747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.518758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.518949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.518959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.519080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.519089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.519218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.519231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 
00:27:14.596 [2024-07-15 22:05:08.519364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.519374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.519594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.519604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.519730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.519740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.519936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.519946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.520067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.520078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.520186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.520196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.520380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.520391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.520482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.520492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.520605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.520616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.520728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.520740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 
00:27:14.596 [2024-07-15 22:05:08.520873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.520883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.521072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.596 [2024-07-15 22:05:08.521082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.596 qpair failed and we were unable to recover it. 00:27:14.596 [2024-07-15 22:05:08.521262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.521273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.521395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.521405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.521532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.521542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.521668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.521679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.521860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.521870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.522012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.522023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.522248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.522259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.522444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.522454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 
00:27:14.597 [2024-07-15 22:05:08.522630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.522641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.522786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.522796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.522872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.522884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.523065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.523076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.523202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.523212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.523415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.523426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.523599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.523609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.523727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.523737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.523850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.523860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.524050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.524061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 
00:27:14.597 [2024-07-15 22:05:08.524257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.524268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.524443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.524454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.524578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.524588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.524696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.524707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.524883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.524894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.524999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.525010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.525175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.525186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.525365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.525377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.525586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.525596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.525717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.525728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 
00:27:14.597 [2024-07-15 22:05:08.525874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.525885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.526047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.526057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.526143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.526153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.526347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.526358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.526552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.526562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.526682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.526693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.526802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.526813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.526984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.526994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.527118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.527129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.527310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.527323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 
00:27:14.597 [2024-07-15 22:05:08.527438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.527449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.527694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.597 [2024-07-15 22:05:08.527705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.597 qpair failed and we were unable to recover it. 00:27:14.597 [2024-07-15 22:05:08.527813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.527824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.527952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.527962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.528086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.528097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.528170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.528180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.528365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.528376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.528550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.528560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.528684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.528695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.528819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.528830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 
00:27:14.598 [2024-07-15 22:05:08.528945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.528956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.529172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.529183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.529377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.529388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.529550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.529561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.529651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.529662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.529835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.529846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.530046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.530057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.530152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.530163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.530351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.530362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.530486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.530496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 
00:27:14.598 [2024-07-15 22:05:08.530605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.530615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.530729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.530739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.530813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.530822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.531064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.531075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.531151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.531162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.531286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.531297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.531420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.531431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.531549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.531560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.531680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.531691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.531792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.531803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 
00:27:14.598 [2024-07-15 22:05:08.531943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.531954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.532064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.532075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.532187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.532198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.532377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.532388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.532509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.532519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.532654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.532664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.532840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.532852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.532962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.532972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.598 [2024-07-15 22:05:08.533087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.598 [2024-07-15 22:05:08.533097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.598 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.533200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.533212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 
00:27:14.599 [2024-07-15 22:05:08.533416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.533427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.533535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.533546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.533687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.533697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.533842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.533853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.533974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.533983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.534091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.534101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.534341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.534352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.534539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.534551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.534672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.534682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 00:27:14.599 [2024-07-15 22:05:08.534800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.599 [2024-07-15 22:05:08.534811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.599 qpair failed and we were unable to recover it. 
00:27:14.599 [2024-07-15 22:05:08.534934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.534945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.535112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.535123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.535306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.535317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.535438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.535449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.535574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.535586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.535709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.535721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.535803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.535814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.535939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.535950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.536060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.536071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.536181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.536191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.536376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.536386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.536501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.536512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.536643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.536653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.536761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.536773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.536883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.536893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.537000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.537011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.537141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.537152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.537259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.537269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.537385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.537396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.537502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.537513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.537637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.537647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.537758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.537769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.599 [2024-07-15 22:05:08.537896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.599 [2024-07-15 22:05:08.537907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.599 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.538013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.538024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.538112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.538122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.538247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.538259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.538385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.538395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.538583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.538594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.538704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.538714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.538819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.538832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.538967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.538978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.539166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.539177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.539290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.539301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.539422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.539433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.539543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.539554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.539726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.539737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.539917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.539927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.540016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.540027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.540151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.540162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.540294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.540304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.540410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.540421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.540601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.540611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.540778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.540789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.540965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.540976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.541160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.541170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.541400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.541411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.541518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.541529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.541664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.541675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.541866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.541876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.541982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.541993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.542125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.542136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.542247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.542259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.542431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.542442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.542555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.542566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.542695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.542706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.542820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.542831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.542962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.542983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.543090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.543105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.543292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.543307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.600 qpair failed and we were unable to recover it.
00:27:14.600 [2024-07-15 22:05:08.543426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.600 [2024-07-15 22:05:08.543441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.543577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.543591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.543770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.543784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.543866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.543880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.544065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.544079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.544273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.544289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.544412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.544427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.544616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.544630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.544816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.544830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.545078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.545092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.545209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.545223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.545357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.545371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.545560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.545574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.545817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.545832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.546084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.546098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.546322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.546336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.546456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.546470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.546717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.546731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.546939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.546954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.547150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.547165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.547346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.547361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.547551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.547566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.547691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.547705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.547908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.547923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.548190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.548207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.548424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.548438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.548635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.548650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.548929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.548944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.549074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.549088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.549282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.549296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.549449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.549463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.549592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.549607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.549748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.549762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.549973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.549988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.550179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.550193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.550346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.550361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.550478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.550493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.550677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.550691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.550808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.550823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.551002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.551015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.551102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.601 [2024-07-15 22:05:08.551112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.601 qpair failed and we were unable to recover it.
00:27:14.601 [2024-07-15 22:05:08.551216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.551231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.551426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.551438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.551609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.551620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.551770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.551781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.551903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.551914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.552036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.552047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.552162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.552172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.552288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.552299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.552426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.552436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.552647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.552657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.552874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.552890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.553103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.553123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.553280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.553296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.553411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.553425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.553530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.553545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.553707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.553722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.553845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.553859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.554057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.554072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.554213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.554234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.554369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.554384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.554508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.554523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.554640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.554654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.554786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.554800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.554995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.555009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.555194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.555208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.555346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.555361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.555498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.555512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.555697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.555711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.555902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.555916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.556041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.556055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.556245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.556259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.556457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.556471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.556674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.556688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.556826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.556840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.557022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.557036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.557283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.557298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.557491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.557505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.557707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.557724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.557849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.557863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.558000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.558015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.558141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.602 [2024-07-15 22:05:08.558155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.602 qpair failed and we were unable to recover it.
00:27:14.602 [2024-07-15 22:05:08.558286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.558300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.558431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.558445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.558651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.558665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.558854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.558868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.559063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.559077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.559201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.559216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.559352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.559367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.559493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.559508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.559705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.559720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.559923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.559938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.560082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.560097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.560221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.560240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.560430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.560444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.560575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.560590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.560805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.560820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.560937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.560952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.561079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.561093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.561261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.561276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.561409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.561422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.561615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.561629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.561887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.561902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.562030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.562044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.562171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.562185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.562313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.562330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.562528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.562542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.562729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.562744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.562890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.562904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.563021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.563035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.563235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.603 [2024-07-15 22:05:08.563250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420
00:27:14.603 qpair failed and we were unable to recover it.
00:27:14.603 [2024-07-15 22:05:08.563386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.603 [2024-07-15 22:05:08.563400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.603 qpair failed and we were unable to recover it. 00:27:14.603 [2024-07-15 22:05:08.563581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.603 [2024-07-15 22:05:08.563595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.603 qpair failed and we were unable to recover it. 00:27:14.603 [2024-07-15 22:05:08.563728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.603 [2024-07-15 22:05:08.563742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.603 qpair failed and we were unable to recover it. 00:27:14.603 [2024-07-15 22:05:08.563880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.603 [2024-07-15 22:05:08.563894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.603 qpair failed and we were unable to recover it. 00:27:14.603 [2024-07-15 22:05:08.564026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.603 [2024-07-15 22:05:08.564040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.603 qpair failed and we were unable to recover it. 00:27:14.603 [2024-07-15 22:05:08.564236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.603 [2024-07-15 22:05:08.564251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.603 qpair failed and we were unable to recover it. 00:27:14.603 [2024-07-15 22:05:08.564368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.603 [2024-07-15 22:05:08.564383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.603 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.564508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.564523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.564641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.564656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.564810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.564824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 
00:27:14.604 [2024-07-15 22:05:08.564963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.564977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.565238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.565253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.565368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.565383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.565470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.565484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.565594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.565608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.565797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.565811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.565937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.565951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.566082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.566096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.566239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.566254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.566446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.566461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 
00:27:14.604 [2024-07-15 22:05:08.566589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.566603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.566736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.566752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.566856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.566870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.566981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.566995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.567121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.567136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.567322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.567336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.567456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.567470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.567620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.567635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.567819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.567833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.567971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.567985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 
00:27:14.604 [2024-07-15 22:05:08.568108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.568122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.568312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.568327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.568528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.568542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.568676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.568691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.568820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.568835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.568960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.568979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.569196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.569211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.569449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.569471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.569618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.569633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.569842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.569856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 
00:27:14.604 [2024-07-15 22:05:08.569981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.569996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.570120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.570134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.570334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.570351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.570571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.570585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.570774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.570789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.570982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.570997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.571118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.571132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.604 [2024-07-15 22:05:08.571314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.604 [2024-07-15 22:05:08.571330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.604 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.571451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.571469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.571675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.571690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 
00:27:14.605 [2024-07-15 22:05:08.571899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.571913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.572032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.572046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.572193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.572208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.572333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.572348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.572476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.572491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.572603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.572618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.572804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.572819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.572940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.572954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.573090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.573104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.573219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.573239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 
00:27:14.605 [2024-07-15 22:05:08.573366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.573380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.573592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.573606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.573745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.573760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.573875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.573889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.574066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.574081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.574211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.574230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.574369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.574383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.574511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.574526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.574646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.574660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.574788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.574803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 
00:27:14.605 [2024-07-15 22:05:08.574929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.574943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.575071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.575085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.575208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.575222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.575415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.575429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.575547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.575561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72d0000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.575764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.575777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.575916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.575927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.576094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.576105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.576307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.576319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.576526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.576537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 
00:27:14.605 [2024-07-15 22:05:08.576657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.576667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.576811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.576822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.576894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.576905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.577016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.577027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.577152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.577162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.577338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.577350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.577529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.577539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.577716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.605 [2024-07-15 22:05:08.577727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.605 qpair failed and we were unable to recover it. 00:27:14.605 [2024-07-15 22:05:08.577901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.577914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.578054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.578065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 
00:27:14.606 [2024-07-15 22:05:08.578179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.578191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.578311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.578322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.578443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.578454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.578578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.578587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.578818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.578829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.579010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.579020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.579146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.579157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.579339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.579350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.579662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.579673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.579779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.579790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 
00:27:14.606 [2024-07-15 22:05:08.579910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.579920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.580049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.580060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.580201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.580212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.580410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.580421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.580549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.580560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.580740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.580750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.580884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.580895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.581009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.581019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.581206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.581217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.581399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.581410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 
00:27:14.606 [2024-07-15 22:05:08.581607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.581618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.581755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.581765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.581951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.581962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.582068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.582078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.582257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.582267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.582460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.582476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.582614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.582628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.582752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.582766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.582903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.582916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.583103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.583117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 
00:27:14.606 [2024-07-15 22:05:08.583304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.583319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.583521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.583535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.583681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.583695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.583812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.583827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.583941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.583955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.584090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.584104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.584287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.584301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.584505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.584519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.606 [2024-07-15 22:05:08.584700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.606 [2024-07-15 22:05:08.584717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.606 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.584899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.584912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c0000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 
00:27:14.607 [2024-07-15 22:05:08.585129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.585140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.585261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.585271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.585446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.585457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.585585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.585595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.585884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.585894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.585981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.585991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.586123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.586133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.586274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.586285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.586403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.586414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.586505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.586515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 
00:27:14.607 [2024-07-15 22:05:08.586636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.586645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.586768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.586778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.586958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.586968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.587172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.587183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.587295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.587305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.587440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.587450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.587566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.587577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.587827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.587837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.587946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.587956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 00:27:14.607 [2024-07-15 22:05:08.588072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.607 [2024-07-15 22:05:08.588082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.607 qpair failed and we were unable to recover it. 
00:27:14.607 [2024-07-15 22:05:08.588262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.607 [2024-07-15 22:05:08.588272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.607 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats for every reconnect attempt from 22:05:08.588 through 22:05:08.613 ...]
00:27:14.611 [2024-07-15 22:05:08.613323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.611 [2024-07-15 22:05:08.613333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.611 qpair failed and we were unable to recover it.
[... connect()/qpair failures continue unchanged from 22:05:08.613480 through 22:05:08.614527 ...]
00:27:14.611 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:14.611 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:27:14.611 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:14.611 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:14.611 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the shell trace above was emitted interleaved with further connect()/qpair failures, 22:05:08.614769 through 22:05:08.616082, all errno = 111 against tqpair=0x7f72c8000b90, addr=10.0.0.2, port=4420 ...]
00:27:14.611 [2024-07-15 22:05:08.616288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.611 [2024-07-15 22:05:08.616298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.611 qpair failed and we were unable to recover it.
[... the same failure repeats from 22:05:08.616288 through 22:05:08.621781 ...]
00:27:14.612 [2024-07-15 22:05:08.621781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.612 [2024-07-15 22:05:08.621792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.612 qpair failed and we were unable to recover it.
00:27:14.612 [2024-07-15 22:05:08.621971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-07-15 22:05:08.621982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-07-15 22:05:08.622177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-07-15 22:05:08.622187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.612 qpair failed and we were unable to recover it. 00:27:14.612 [2024-07-15 22:05:08.622308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.612 [2024-07-15 22:05:08.622321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.622424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.622435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.622642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.622652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.622783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.622793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.622918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.622929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.623105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.623117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.623247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.623258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.623436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.623446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 
00:27:14.613 [2024-07-15 22:05:08.623567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.623577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.623691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.623702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.623812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.623822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.623940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.623950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.624075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.624085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.624282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.624293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.624406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.624417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.624540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.624549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.624750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.624760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.624946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.624957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 
00:27:14.613 [2024-07-15 22:05:08.625068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.625078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.625202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.625212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.625330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.625341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.625483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.625505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.625632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.625646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.625780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.625794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.625916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.625929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.626053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.626067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.626186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.626199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.626340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.626355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 
00:27:14.613 [2024-07-15 22:05:08.626452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.626465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.626598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.626611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.626827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.626842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.627098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.627113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.627243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.627257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.627376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.627391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.627578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.627592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.627737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.627753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.627881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.627896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.628022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.628032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 
00:27:14.613 [2024-07-15 22:05:08.628149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.628160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.628281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.628292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.628421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.613 [2024-07-15 22:05:08.628431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.613 qpair failed and we were unable to recover it. 00:27:14.613 [2024-07-15 22:05:08.628604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.628614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.628729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.628740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.628921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.628930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.629103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.629113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.629265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.629275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.629389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.629399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.629528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.629538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 
00:27:14.614 [2024-07-15 22:05:08.629654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.629665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.629780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.629790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.629896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.629908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.630031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.630041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.630171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.630182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.630257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.630267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.630439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.630449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.630570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.630580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.630763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.630773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.630869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.630880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 
00:27:14.614 [2024-07-15 22:05:08.630962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.630972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.631150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.631160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.631279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.631289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.631397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.631409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.631520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.631530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.631653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.631664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.631772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.631783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.631960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.631969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.632073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.632082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.632204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.632214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 
00:27:14.614 [2024-07-15 22:05:08.632355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.632365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.632473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.632483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.632590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.632600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.632703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.632714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.632887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.632897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.633008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.633018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.633138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.633149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.633272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.633282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.633392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.633403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.633527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.633537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 
00:27:14.614 [2024-07-15 22:05:08.633644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.633654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.633774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.633784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.633885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.633896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.633994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.614 [2024-07-15 22:05:08.634005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.614 qpair failed and we were unable to recover it. 00:27:14.614 [2024-07-15 22:05:08.634098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.634108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.634237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.634247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.634362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.634374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.634486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.634496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.634599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.634609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.634727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.634737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 
00:27:14.615 [2024-07-15 22:05:08.634913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.634924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.635059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.635069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.635180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.635190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.635373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.635384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.635495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.635506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.635616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.635626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.635740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.635749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.635865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.635875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.635994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.636004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.636131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.636141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 
00:27:14.615 [2024-07-15 22:05:08.636266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.636276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.636384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.636394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.636512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.636524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.636644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.636657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.636760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.636771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.636913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.636923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.637030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.637039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.637159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.637170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.637290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.637300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.637387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.637397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 
00:27:14.615 [2024-07-15 22:05:08.637576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.637586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.637693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.637704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.637813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.637823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.637934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.637944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.638060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.638070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.638199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.638210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.638338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.638350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.638460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.638470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.638602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.638612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.638728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.638738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 
00:27:14.615 [2024-07-15 22:05:08.638850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.638860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.638961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.638971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.639083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.639094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.639212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.639222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.615 qpair failed and we were unable to recover it. 00:27:14.615 [2024-07-15 22:05:08.639341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.615 [2024-07-15 22:05:08.639353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-07-15 22:05:08.639522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-07-15 22:05:08.639533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-07-15 22:05:08.639638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-07-15 22:05:08.639647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-07-15 22:05:08.639813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-07-15 22:05:08.639823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-07-15 22:05:08.639946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-07-15 22:05:08.639956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 00:27:14.616 [2024-07-15 22:05:08.640060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-07-15 22:05:08.640070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it. 
00:27:14.616 [2024-07-15 22:05:08.640187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.616 [2024-07-15 22:05:08.640198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.616 qpair failed and we were unable to recover it.
00:27:14.616 [the connect()/qpair-failure pair above repeats 39 more times for tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420, timestamps 22:05:08.640346 through 22:05:08.645642]
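Note: errno = 111 is ECONNREFUSED on Linux. The host's connect() reaches 10.0.0.2:4420, but nothing is listening on the NVMe/TCP port, which is the expected state while the target side of this disconnect test is down and the initiator keeps retrying. The minimal standalone C sketch below is not part of this log; it reproduces the same errno by connecting to a port with no listener (127.0.0.1 stands in for 10.0.0.2, which is only reachable inside the test rig):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same port as the NVMe/TCP target in the log; 127.0.0.1 stands in
     * for 10.0.0.2 so the refusal is reproducible on any machine with
     * nothing bound to 4420. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET,
                              .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        /* With no listener bound, this prints
         * "connect() failed, errno = 111 (Connection refused)",
         * matching posix_sock_create()'s message in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}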
00:27:14.617 [2024-07-15 22:05:08.645838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-07-15 22:05:08.645855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7ffc0 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it.
00:27:14.617 [the pair above repeats 11 more times for tqpair=0x1e7ffc0, timestamps 22:05:08.645996 through 22:05:08.647421, then 8 times for tqpair=0x7f72c8000b90, timestamps 22:05:08.647546 through 22:05:08.648486]
00:27:14.617 [2024-07-15 22:05:08.648610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.617 [2024-07-15 22:05:08.648621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.617 qpair failed and we were unable to recover it.
00:27:14.618 [the pair above repeats 17 more times for tqpair=0x7f72c8000b90, timestamps 22:05:08.648724 through 22:05:08.651312]
00:27:14.618 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:14.618 [2024-07-15 22:05:08.651426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-07-15 22:05:08.651438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it.
00:27:14.618 [2024-07-15 22:05:08.651556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-07-15 22:05:08.651565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it.
00:27:14.618 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:14.618 [the pair above repeats 3 more times, timestamps 22:05:08.651684 through 22:05:08.651929]
00:27:14.618 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.618 [the pair repeats twice more, timestamps 22:05:08.652080 through 22:05:08.652183]
00:27:14.618 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.618 [the pair repeats 3 more times, timestamps 22:05:08.652298 through 22:05:08.652755]
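Note: "rpc_cmd bdev_malloc_create 64 512 -b Malloc0" in the trace above asks the running SPDK application for a RAM-backed block device named Malloc0 with a total size of 64 MiB and a 512-byte block size (131072 blocks); the bare "Malloc0" printed further down is that RPC's result. The C sketch below of the equivalent raw JSON-RPC exchange is an illustration, not part of the test: the socket path /var/tmp/spdk.sock and the parameter names num_blocks/block_size/name are SPDK's documented defaults, assumed rather than shown in this log.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    /* 64 MiB / 512 B = 131072 blocks, mirroring
     * "bdev_malloc_create 64 512 -b Malloc0" from the trace. */
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
        "\"params\":{\"name\":\"Malloc0\",\"num_blocks\":131072,"
        "\"block_size\":512}}";
    char resp[512];

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);

    if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        perror("connect");  /* no SPDK app running, or a non-default socket path */
        return 1;
    }
    if (write(fd, req, strlen(req)) != (ssize_t)strlen(req)) {
        perror("write");
        close(fd);
        return 1;
    }

    ssize_t n = read(fd, resp, sizeof(resp) - 1);
    if (n > 0) {
        resp[n] = '\0';
        /* On success the reply carries "result":"Malloc0", the same bare
         * bdev name the test log prints. */
        printf("%s\n", resp);
    }
    close(fd);
    return 0;
}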
00:27:14.618 [2024-07-15 22:05:08.652864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.618 [2024-07-15 22:05:08.652873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.618 qpair failed and we were unable to recover it.
00:27:14.621 [the pair above repeats over a hundred more times for tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420, timestamps 22:05:08.652997 through 22:05:08.669855]
00:27:14.621 [2024-07-15 22:05:08.669964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.669974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.670212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.670222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.670348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.670357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.670530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.670540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 Malloc0
00:27:14.621 [2024-07-15 22:05:08.670662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.670671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.670807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.670817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.671005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.671016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.671127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.671137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:14.621 [2024-07-15 22:05:08.671313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.671323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
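[Editor's note] The bare "Malloc0" above is the stdout of the rpc_cmd that created the test bdev; the RPC prints the new bdev's name on success. A sketch of the likely call follows, but the size and block size are assumptions, not values taken from this log:

  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512   # 64 MiB backing store, 512-byte blocks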
00:27:14.621 [2024-07-15 22:05:08.671453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.671463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.671575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.671585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:14.621 [2024-07-15 22:05:08.671692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.671702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.671820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.671831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.621 [2024-07-15 22:05:08.672035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.672045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.672154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.621 [2024-07-15 22:05:08.672164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.672288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.672298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.672480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.672490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
00:27:14.621 [2024-07-15 22:05:08.672608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.621 [2024-07-15 22:05:08.672618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.621 qpair failed and we were unable to recover it.
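[Editor's note] rpc_cmd in the trace above is the autotest wrapper around SPDK's scripts/rpc.py, so the transport creation maps onto a direct CLI call. A minimal equivalent, assuming the default RPC socket; reading -o as the TCP C2H-success toggle is an assumption about this rpc.py version and is left out here:

  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp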
00:27:14.621 [2024-07-15 22:05:08.672826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-07-15 22:05:08.672836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.621 [2024-07-15 22:05:08.673022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-07-15 22:05:08.673031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.621 [2024-07-15 22:05:08.673148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-07-15 22:05:08.673157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.621 [2024-07-15 22:05:08.673302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-07-15 22:05:08.673312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.621 [2024-07-15 22:05:08.673434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-07-15 22:05:08.673444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.621 [2024-07-15 22:05:08.673618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-07-15 22:05:08.673627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.621 [2024-07-15 22:05:08.673752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-07-15 22:05:08.673762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.621 [2024-07-15 22:05:08.673869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-07-15 22:05:08.673878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.621 [2024-07-15 22:05:08.674052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.621 [2024-07-15 22:05:08.674062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.621 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.674172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.674182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-07-15 22:05:08.674379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.674389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.674507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.674517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.674664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.674674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.674796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.674805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.674990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.675000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.675268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.675279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.675387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.675396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.675523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.675532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.675653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.675664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.675784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.675793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-07-15 22:05:08.675866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.675875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.676064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.676074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.676265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.676275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.676383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.676393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.676641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.676651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.676778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.676787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.676977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.676986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.677164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.677174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.677285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.677295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.677507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.677517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-07-15 22:05:08.677695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.622 [2024-07-15 22:05:08.677706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.622 qpair failed and we were unable to recover it.
00:27:14.622 [2024-07-15 22:05:08.677893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.622 [2024-07-15 22:05:08.677904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.622 qpair failed and we were unable to recover it.
00:27:14.622 [2024-07-15 22:05:08.678023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.622 [2024-07-15 22:05:08.678033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.622 qpair failed and we were unable to recover it.
00:27:14.622 [2024-07-15 22:05:08.678151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.622 [2024-07-15 22:05:08.678161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.622 qpair failed and we were unable to recover it.
00:27:14.622 [2024-07-15 22:05:08.678195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:14.622 [2024-07-15 22:05:08.678301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.622 [2024-07-15 22:05:08.678311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.622 qpair failed and we were unable to recover it.
00:27:14.622 [2024-07-15 22:05:08.678436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.622 [2024-07-15 22:05:08.678446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.622 qpair failed and we were unable to recover it.
00:27:14.622 [2024-07-15 22:05:08.678566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.622 [2024-07-15 22:05:08.678576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.622 qpair failed and we were unable to recover it.
00:27:14.622 [2024-07-15 22:05:08.678699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.622 [2024-07-15 22:05:08.678709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.622 qpair failed and we were unable to recover it.
00:27:14.622 [2024-07-15 22:05:08.678953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.622 [2024-07-15 22:05:08.678965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.622 qpair failed and we were unable to recover it.
00:27:14.622 [2024-07-15 22:05:08.679067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.679083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.679213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.679223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.679351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.679361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.679490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.679500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.679616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.679626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.679835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.679845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.680032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.680042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.680150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.680161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.680314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.680326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 00:27:14.622 [2024-07-15 22:05:08.680483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.622 [2024-07-15 22:05:08.680494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.622 qpair failed and we were unable to recover it. 
00:27:14.622 [2024-07-15 22:05:08.680704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.680715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.680850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.680860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.680986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.680996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.681169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.681180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.681301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.681311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.681439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.681449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.681554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.681564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.681693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.681703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.681844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.681855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.681970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.681980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-07-15 22:05:08.682090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.682100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.682233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.682243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.682372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.682382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.682556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.682566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.682685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.682695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.682912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.682921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.683033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.683042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.683157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.683166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.683340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.683350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.683515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.683525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-07-15 22:05:08.683653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.683663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.683849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.683859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.683970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.683979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.684107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.684117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.684236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.684246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.684376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.684386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.684498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.684508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.684622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.684632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.684739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.684748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.684869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.684879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-07-15 22:05:08.684989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.684998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.685175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.685184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.685319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.685330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.685445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.685455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.685574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.685584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.685713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.685723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.685907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.685917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.686028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.686038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.686176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.686187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 00:27:14.623 [2024-07-15 22:05:08.686305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.623 [2024-07-15 22:05:08.686315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.623 qpair failed and we were unable to recover it. 
00:27:14.623 [2024-07-15 22:05:08.686422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.686431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.686535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.686545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:14.624 [2024-07-15 22:05:08.686725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.686736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.686842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.686851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.686967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.686979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:14.624 [2024-07-15 22:05:08.687094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.687104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.687255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.687265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.624 [2024-07-15 22:05:08.687442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.687453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
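[Editor's note] The subsystem creation traced above maps one-to-one onto the rpc.py CLI: -a allows any host to connect and -s sets the subsystem serial number. The equivalent direct call:

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001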
00:27:14.624 [2024-07-15 22:05:08.687574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.687584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.624 [2024-07-15 22:05:08.687706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.687716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.687837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.687847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.687951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.687960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.688075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.688084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.688273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.688283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.688480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.688489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.688682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.688691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.688811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.688821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.624 [2024-07-15 22:05:08.688932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.688942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-07-15 22:05:08.689113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.689123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-07-15 22:05:08.689241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.689251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-07-15 22:05:08.689358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.689368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-07-15 22:05:08.689482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.689491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-07-15 22:05:08.689602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.689613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-07-15 22:05:08.689741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.689753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-07-15 22:05:08.689861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.689870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-07-15 22:05:08.689978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.689988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 00:27:14.624 [2024-07-15 22:05:08.690232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.624 [2024-07-15 22:05:08.690241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420 00:27:14.624 qpair failed and we were unable to recover it. 
00:27:14.624 [2024-07-15 22:05:08.690316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.624 [2024-07-15 22:05:08.690326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f72c8000b90 with addr=10.0.0.2, port=4420
00:27:14.624 qpair failed and we were unable to recover it.
00:27:14.625 [... the same three-line failure (connect() errno = 111 to 10.0.0.2:4420, tqpair=0x7f72c8000b90) repeats with fresh timestamps through 22:05:08.694 ...]
00:27:14.625 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:14.625 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:14.625 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.625 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.626 [... the connect() errno = 111 retry loop keeps failing in the background, interleaved with the trace above, through 22:05:08.702 ...]
00:27:14.626 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:14.626 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:14.626 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.627 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.627 [... connect() errno = 111 retries continue through 22:05:08.706, right up to the point the listener below comes online ...]
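errno = 111 is ECONNREFUSED: the host's reconnect loop is dialing 10.0.0.2:4420 while no NVMe/TCP listener exists there yet, which is exactly the window this disconnect test creates. A minimal shell sketch (not part of the test script) that reproduces the same refusal, assuming the same address and port:

    # probe the target port; with no listener the connect fails as in the log
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 refused (errno 111, ECONNREFUSED)"
    fi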
00:27:14.627 [2024-07-15 22:05:08.706422] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:14.627 [2024-07-15 22:05:08.708718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.627 [2024-07-15 22:05:08.708802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.627 [2024-07-15 22:05:08.708824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.627 [2024-07-15 22:05:08.708832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.627 [2024-07-15 22:05:08.708838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:14.627 [2024-07-15 22:05:08.708858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.627 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:14.627 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:14.627 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.627 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.627 [2024-07-15 22:05:08.718723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.627 [2024-07-15 22:05:08.718805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.627 [2024-07-15 22:05:08.718824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.627 [2024-07-15 22:05:08.718831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.627 [2024-07-15 22:05:08.718838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:14.627 [2024-07-15 22:05:08.718858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:14.627 qpair failed and we were unable to recover it.
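The rpc_cmd calls traced at host/target_disconnect.sh@24-26 drive the target over SPDK's JSON-RPC interface. As a sketch of the equivalent standalone invocations (assuming SPDK's scripts/rpc.py and the default RPC socket; the test's rpc_cmd wrapper is a thin layer over it):

    # attach the Malloc0 bdev as a namespace of the test subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # add the TCP listener the host has been retrying against
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # expose the discovery subsystem on the same address and port
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After the first listener lands, the nvmf_tcp_listen *NOTICE* above shows the target accepting TCP connections, and the failure mode shifts from refused sockets to rejected fabric CONNECT commands.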
00:27:14.627 22:05:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3852297
00:27:14.627 [2024-07-15 22:05:08.728739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.627 [2024-07-15 22:05:08.728827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.627 [2024-07-15 22:05:08.728845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.627 [2024-07-15 22:05:08.728853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.627 [2024-07-15 22:05:08.728860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:14.627 [2024-07-15 22:05:08.728876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:14.627 qpair failed and we were unable to recover it.
00:27:14.888 [... the same CONNECT failure block (Unknown controller ID 0x1, rc -5, sct 1, sc 130, CQ transport error -6 on qpair id 2) repeats, differing only in timestamps, roughly every 10 ms from 22:05:08.738 through 22:05:09.019; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:14.888 [2024-07-15 22:05:09.029465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.888 [2024-07-15 22:05:09.029534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.888 [2024-07-15 22:05:09.029549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.888 [2024-07-15 22:05:09.029556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.888 [2024-07-15 22:05:09.029562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.888 [2024-07-15 22:05:09.029582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.888 qpair failed and we were unable to recover it. 00:27:14.888 [2024-07-15 22:05:09.039545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.888 [2024-07-15 22:05:09.039615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.888 [2024-07-15 22:05:09.039631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.888 [2024-07-15 22:05:09.039638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.888 [2024-07-15 22:05:09.039645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.888 [2024-07-15 22:05:09.039660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.889 qpair failed and we were unable to recover it. 00:27:14.889 [2024-07-15 22:05:09.049569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.889 [2024-07-15 22:05:09.049651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.889 [2024-07-15 22:05:09.049666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.889 [2024-07-15 22:05:09.049673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.889 [2024-07-15 22:05:09.049680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.889 [2024-07-15 22:05:09.049694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.889 qpair failed and we were unable to recover it. 
00:27:14.889 [2024-07-15 22:05:09.059582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.889 [2024-07-15 22:05:09.059651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.889 [2024-07-15 22:05:09.059666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.889 [2024-07-15 22:05:09.059673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.889 [2024-07-15 22:05:09.059679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.889 [2024-07-15 22:05:09.059693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.889 qpair failed and we were unable to recover it. 00:27:14.889 [2024-07-15 22:05:09.069632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.889 [2024-07-15 22:05:09.069701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.889 [2024-07-15 22:05:09.069716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.889 [2024-07-15 22:05:09.069723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.889 [2024-07-15 22:05:09.069730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.889 [2024-07-15 22:05:09.069744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.889 qpair failed and we were unable to recover it. 00:27:14.889 [2024-07-15 22:05:09.079615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.889 [2024-07-15 22:05:09.079722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.889 [2024-07-15 22:05:09.079741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.889 [2024-07-15 22:05:09.079748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.889 [2024-07-15 22:05:09.079754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.889 [2024-07-15 22:05:09.079769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.889 qpair failed and we were unable to recover it. 
00:27:14.889 [2024-07-15 22:05:09.089706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.889 [2024-07-15 22:05:09.089787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.889 [2024-07-15 22:05:09.089802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.889 [2024-07-15 22:05:09.089810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.889 [2024-07-15 22:05:09.089817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.889 [2024-07-15 22:05:09.089831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.889 qpair failed and we were unable to recover it. 00:27:14.889 [2024-07-15 22:05:09.099728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.889 [2024-07-15 22:05:09.099798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.889 [2024-07-15 22:05:09.099813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.889 [2024-07-15 22:05:09.099821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.889 [2024-07-15 22:05:09.099827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.889 [2024-07-15 22:05:09.099842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.889 qpair failed and we were unable to recover it. 00:27:14.889 [2024-07-15 22:05:09.109743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.889 [2024-07-15 22:05:09.109817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.889 [2024-07-15 22:05:09.109832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.889 [2024-07-15 22:05:09.109840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.889 [2024-07-15 22:05:09.109846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.889 [2024-07-15 22:05:09.109861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.889 qpair failed and we were unable to recover it. 
00:27:14.889 [2024-07-15 22:05:09.119757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.889 [2024-07-15 22:05:09.119820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.889 [2024-07-15 22:05:09.119835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.889 [2024-07-15 22:05:09.119843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.889 [2024-07-15 22:05:09.119852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:14.889 [2024-07-15 22:05:09.119867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.889 qpair failed and we were unable to recover it. 00:27:15.149 [2024-07-15 22:05:09.129812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.149 [2024-07-15 22:05:09.129909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.149 [2024-07-15 22:05:09.129924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.149 [2024-07-15 22:05:09.129931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.129938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.129953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-07-15 22:05:09.139828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.139897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.139913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.139919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.139926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.139941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-07-15 22:05:09.149837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.149906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.149921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.149928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.149935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.149949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-07-15 22:05:09.159900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.159972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.159988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.159995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.160001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.160016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-07-15 22:05:09.169955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.170025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.170041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.170048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.170054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.170069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-07-15 22:05:09.179916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.179991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.180006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.180013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.180019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.180034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-07-15 22:05:09.189972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.190039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.190054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.190062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.190068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.190083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-07-15 22:05:09.200012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.200084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.200100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.200108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.200114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.200129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-07-15 22:05:09.210018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.210087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.210102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.210113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.210119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.210134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-07-15 22:05:09.220061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.220141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.220156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.220164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.220170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.220185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-07-15 22:05:09.230099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.230173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.230188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.230196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.230202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.230217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-07-15 22:05:09.240114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.240198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.240213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.240220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.240230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.240246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-07-15 22:05:09.250133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.250197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.250212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.250219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.250229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.250245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-07-15 22:05:09.260171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.260246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.260262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.260269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.260275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.150 [2024-07-15 22:05:09.260290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-07-15 22:05:09.270213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.150 [2024-07-15 22:05:09.270292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.150 [2024-07-15 22:05:09.270309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.150 [2024-07-15 22:05:09.270317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.150 [2024-07-15 22:05:09.270324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.270339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-07-15 22:05:09.280276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.280358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.280373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.280381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.280387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.280402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-07-15 22:05:09.290270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.290358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.290375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.290382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.290389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.290404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-07-15 22:05:09.300298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.300372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.300388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.300398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.300404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.300419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-07-15 22:05:09.310329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.310407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.310424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.310431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.310437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.310452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-07-15 22:05:09.320389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.320465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.320481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.320489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.320495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.320510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-07-15 22:05:09.330397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.330466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.330482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.330490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.330497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.330512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-07-15 22:05:09.340418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.340490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.340505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.340512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.340518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.340533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-07-15 22:05:09.350449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.350520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.350535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.350542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.350548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.350563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-07-15 22:05:09.360443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.360511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.360526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.360534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.360540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.360555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-07-15 22:05:09.370512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.370593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.370610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.370617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.370624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.370639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-07-15 22:05:09.380535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.151 [2024-07-15 22:05:09.380606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.151 [2024-07-15 22:05:09.380621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.151 [2024-07-15 22:05:09.380628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.151 [2024-07-15 22:05:09.380634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.151 [2024-07-15 22:05:09.380649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.412 [2024-07-15 22:05:09.390550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.390621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.390640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.390648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.390654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.390669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-07-15 22:05:09.400630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.400740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.400757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.400765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.400772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.400786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-07-15 22:05:09.410633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.410705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.410721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.410729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.410735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.410750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 
00:27:15.412 [2024-07-15 22:05:09.420704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.420784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.420799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.420806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.420812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.420827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-07-15 22:05:09.430698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.430771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.430787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.430795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.430802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.430820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-07-15 22:05:09.440715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.440782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.440798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.440805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.440811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.440826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 
00:27:15.412 [2024-07-15 22:05:09.450751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.450829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.450844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.450852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.450858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.450873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-07-15 22:05:09.460775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.460845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.460860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.460867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.460874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.460888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-07-15 22:05:09.470809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.470890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.470905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.470912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.470918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.470934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 
00:27:15.412 [2024-07-15 22:05:09.480859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.480925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.480943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.480950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.480956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.480971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-07-15 22:05:09.490783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.490856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.490871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.490878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.490885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.490899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-07-15 22:05:09.500887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.500957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.500972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.412 [2024-07-15 22:05:09.500980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.412 [2024-07-15 22:05:09.500986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.412 [2024-07-15 22:05:09.501000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.412 qpair failed and we were unable to recover it. 
00:27:15.412 [2024-07-15 22:05:09.510956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.412 [2024-07-15 22:05:09.511029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.412 [2024-07-15 22:05:09.511045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.413 [2024-07-15 22:05:09.511053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.413 [2024-07-15 22:05:09.511059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.413 [2024-07-15 22:05:09.511074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-07-15 22:05:09.520955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.413 [2024-07-15 22:05:09.521038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.413 [2024-07-15 22:05:09.521053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.413 [2024-07-15 22:05:09.521060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.413 [2024-07-15 22:05:09.521069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.413 [2024-07-15 22:05:09.521086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-07-15 22:05:09.530978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.413 [2024-07-15 22:05:09.531047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.413 [2024-07-15 22:05:09.531062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.413 [2024-07-15 22:05:09.531069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.413 [2024-07-15 22:05:09.531076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.413 [2024-07-15 22:05:09.531090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.413 qpair failed and we were unable to recover it. 
00:27:15.413 [2024-07-15 22:05:09.541070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.541136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.541151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.541158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.541164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.541180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.551031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.551158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.551174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.551181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.551189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.551205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.561058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.561125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.561141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.561148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.561154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.561169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.571096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.571173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.571188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.571195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.571202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.571216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.581120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.581189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.581205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.581212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.581218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.581237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.591179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.591256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.591272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.591280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.591286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.591301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.601187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.601316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.601333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.601340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.601347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.601365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.611243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.611354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.611371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.611378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.611389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.611404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.621236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.621309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.621324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.621331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.621337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.621352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.631283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.631364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.631380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.631387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.631393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.631408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.641288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.641359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.641375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.641382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.641388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.413 [2024-07-15 22:05:09.641403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.413 qpair failed and we were unable to recover it.
00:27:15.413 [2024-07-15 22:05:09.651330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.413 [2024-07-15 22:05:09.651443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.413 [2024-07-15 22:05:09.651460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.413 [2024-07-15 22:05:09.651467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.413 [2024-07-15 22:05:09.651474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.414 [2024-07-15 22:05:09.651489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.414 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.661343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.661413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.661429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.661436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.661443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.661459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.671386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.671459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.671475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.671482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.671489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.671504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.681426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.681504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.681519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.681527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.681533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.681548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.691502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.691613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.691631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.691638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.691645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.691660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.701475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.701546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.701560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.701575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.701581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.701596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.711515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.711585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.711600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.711607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.711613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.711629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.721568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.721634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.721649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.721657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.721663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.721678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.731626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.731729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.731745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.731752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.731758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.731774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.741573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.741645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.741662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.741669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.741676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.741692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.751542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.751614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.751629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.751637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.751643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.751659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.761642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.761727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.761745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.761752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.761758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.761773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.771676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.674 [2024-07-15 22:05:09.771744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.674 [2024-07-15 22:05:09.771759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.674 [2024-07-15 22:05:09.771766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.674 [2024-07-15 22:05:09.771772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.674 [2024-07-15 22:05:09.771788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.674 qpair failed and we were unable to recover it.
00:27:15.674 [2024-07-15 22:05:09.781668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.781789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.781805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.781813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.781819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.781834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.791665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.791742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.791761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.791768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.791775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.791789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.801761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.801836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.801853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.801862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.801869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.801884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.811821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.811928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.811943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.811950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.811956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.811971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.821843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.821959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.821975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.821982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.821990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.822005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.831875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.831981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.831997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.832004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.832011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.832029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.841887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.841955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.841971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.841978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.841984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.841999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.851905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.851978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.851993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.852000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.852006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.852021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.861979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.862047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.862062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.862070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.862076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.862090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.872013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.872126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.872142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.872150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.872156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.872171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.882002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.882072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.882090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.882097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.882104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.882119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.892034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.892103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.892119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.892126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.892133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.892149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.902060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.902128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.902143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.902150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.902156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.902171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.675 [2024-07-15 22:05:09.912105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.675 [2024-07-15 22:05:09.912183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.675 [2024-07-15 22:05:09.912199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.675 [2024-07-15 22:05:09.912207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.675 [2024-07-15 22:05:09.912213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.675 [2024-07-15 22:05:09.912231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.675 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:09.922128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:09.922195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:09.922209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:09.922217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:09.922231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:09.922246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:09.932160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:09.932242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:09.932257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:09.932264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:09.932271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:09.932286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:09.942170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:09.942243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:09.942259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:09.942266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:09.942272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:09.942287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:09.952212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:09.952282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:09.952297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:09.952305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:09.952311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:09.952325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:09.962232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:09.962297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:09.962312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:09.962320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:09.962326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:09.962342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:09.972270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:09.972341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:09.972358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:09.972365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:09.972372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:09.972387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:09.982238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:09.982309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:09.982324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:09.982331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:09.982337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:09.982352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:09.992301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:09.992369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:09.992384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:09.992392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:09.992398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:09.992413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:10.002386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:10.002459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:10.002475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:10.002483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:10.002489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:10.002504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:10.012376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:10.012456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:10.012477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:10.012486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:10.012497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:10.012514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:10.022398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:10.022469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:10.022485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:10.022492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:10.022499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:10.022514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:10.032460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:10.032537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:10.032555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:10.032563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:10.032569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:10.032585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.936 [2024-07-15 22:05:10.042450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.936 [2024-07-15 22:05:10.042521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.936 [2024-07-15 22:05:10.042538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.936 [2024-07-15 22:05:10.042545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.936 [2024-07-15 22:05:10.042552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.936 [2024-07-15 22:05:10.042568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.936 qpair failed and we were unable to recover it.
00:27:15.937 [2024-07-15 22:05:10.053599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.937 [2024-07-15 22:05:10.053686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.937 [2024-07-15 22:05:10.053705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.937 [2024-07-15 22:05:10.053714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.937 [2024-07-15 22:05:10.053721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.937 [2024-07-15 22:05:10.053739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.937 qpair failed and we were unable to recover it.
00:27:15.937 [2024-07-15 22:05:10.062486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.937 [2024-07-15 22:05:10.062569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.937 [2024-07-15 22:05:10.062586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.937 [2024-07-15 22:05:10.062593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.937 [2024-07-15 22:05:10.062599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.937 [2024-07-15 22:05:10.062615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.937 qpair failed and we were unable to recover it.
00:27:15.937 [2024-07-15 22:05:10.072514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.937 [2024-07-15 22:05:10.072581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.937 [2024-07-15 22:05:10.072599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.937 [2024-07-15 22:05:10.072606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.937 [2024-07-15 22:05:10.072613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.937 [2024-07-15 22:05:10.072629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.937 qpair failed and we were unable to recover it.
00:27:15.937 [2024-07-15 22:05:10.082602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.937 [2024-07-15 22:05:10.082678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.937 [2024-07-15 22:05:10.082695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.937 [2024-07-15 22:05:10.082704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.937 [2024-07-15 22:05:10.082711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.937 [2024-07-15 22:05:10.082727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.937 qpair failed and we were unable to recover it.
00:27:15.937 [2024-07-15 22:05:10.092615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.937 [2024-07-15 22:05:10.092687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.937 [2024-07-15 22:05:10.092702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.937 [2024-07-15 22:05:10.092709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.937 [2024-07-15 22:05:10.092716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.937 [2024-07-15 22:05:10.092731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.937 qpair failed and we were unable to recover it.
00:27:15.937 [2024-07-15 22:05:10.102658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.937 [2024-07-15 22:05:10.102744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.937 [2024-07-15 22:05:10.102760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.937 [2024-07-15 22:05:10.102770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.937 [2024-07-15 22:05:10.102776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:15.937 [2024-07-15 22:05:10.102792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.937 qpair failed and we were unable to recover it.
00:27:15.937 [2024-07-15 22:05:10.112651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.937 [2024-07-15 22:05:10.112717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.937 [2024-07-15 22:05:10.112733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.937 [2024-07-15 22:05:10.112740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.937 [2024-07-15 22:05:10.112747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.937 [2024-07-15 22:05:10.112761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.937 qpair failed and we were unable to recover it. 00:27:15.937 [2024-07-15 22:05:10.122650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.937 [2024-07-15 22:05:10.122719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.937 [2024-07-15 22:05:10.122736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.937 [2024-07-15 22:05:10.122743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.937 [2024-07-15 22:05:10.122750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.937 [2024-07-15 22:05:10.122765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.937 qpair failed and we were unable to recover it. 00:27:15.937 [2024-07-15 22:05:10.132680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.937 [2024-07-15 22:05:10.132750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.937 [2024-07-15 22:05:10.132764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.937 [2024-07-15 22:05:10.132772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.937 [2024-07-15 22:05:10.132778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.937 [2024-07-15 22:05:10.132792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.937 qpair failed and we were unable to recover it. 
00:27:15.937 [2024-07-15 22:05:10.142746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.937 [2024-07-15 22:05:10.142814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.937 [2024-07-15 22:05:10.142829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.937 [2024-07-15 22:05:10.142836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.937 [2024-07-15 22:05:10.142842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.937 [2024-07-15 22:05:10.142857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.937 qpair failed and we were unable to recover it. 00:27:15.937 [2024-07-15 22:05:10.152729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.937 [2024-07-15 22:05:10.152803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.937 [2024-07-15 22:05:10.152819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.937 [2024-07-15 22:05:10.152827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.937 [2024-07-15 22:05:10.152834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.937 [2024-07-15 22:05:10.152848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.937 qpair failed and we were unable to recover it. 00:27:15.937 [2024-07-15 22:05:10.162813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.937 [2024-07-15 22:05:10.162876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.937 [2024-07-15 22:05:10.162891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.937 [2024-07-15 22:05:10.162899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.937 [2024-07-15 22:05:10.162906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.937 [2024-07-15 22:05:10.162920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.937 qpair failed and we were unable to recover it. 
00:27:15.937 [2024-07-15 22:05:10.172781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.937 [2024-07-15 22:05:10.172852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.937 [2024-07-15 22:05:10.172867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.937 [2024-07-15 22:05:10.172874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.937 [2024-07-15 22:05:10.172880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:15.937 [2024-07-15 22:05:10.172895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.937 qpair failed and we were unable to recover it. 00:27:16.196 [2024-07-15 22:05:10.182820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.196 [2024-07-15 22:05:10.182892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.196 [2024-07-15 22:05:10.182907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.196 [2024-07-15 22:05:10.182915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.196 [2024-07-15 22:05:10.182921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.196 [2024-07-15 22:05:10.182936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.196 qpair failed and we were unable to recover it. 00:27:16.196 [2024-07-15 22:05:10.192916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.196 [2024-07-15 22:05:10.192986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.196 [2024-07-15 22:05:10.193005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.196 [2024-07-15 22:05:10.193014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.196 [2024-07-15 22:05:10.193021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.193037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 
00:27:16.197 [2024-07-15 22:05:10.202964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.203034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.203049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.203057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.203063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.203077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 00:27:16.197 [2024-07-15 22:05:10.212972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.213045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.213060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.213067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.213074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.213088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 00:27:16.197 [2024-07-15 22:05:10.223010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.223079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.223094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.223101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.223107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.223122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 
00:27:16.197 [2024-07-15 22:05:10.233027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.233105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.233120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.233127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.233134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.233151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 00:27:16.197 [2024-07-15 22:05:10.243071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.243176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.243191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.243198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.243206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.243221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 00:27:16.197 [2024-07-15 22:05:10.253042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.253115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.253130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.253138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.253144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.253159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 
00:27:16.197 [2024-07-15 22:05:10.263053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.263119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.263135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.263142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.263149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.263163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 00:27:16.197 [2024-07-15 22:05:10.273139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.273209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.273228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.273236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.273243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.273258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 00:27:16.197 [2024-07-15 22:05:10.283099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.283172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.283191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.283199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.283205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.283219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 
00:27:16.197 [2024-07-15 22:05:10.293187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.293260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.293276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.293283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.293289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.293304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 00:27:16.197 [2024-07-15 22:05:10.303299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.303369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.303384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.303392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.303398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.303413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 00:27:16.197 [2024-07-15 22:05:10.313173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.313251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.313267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.197 [2024-07-15 22:05:10.313275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.197 [2024-07-15 22:05:10.313281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.197 [2024-07-15 22:05:10.313296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.197 qpair failed and we were unable to recover it. 
00:27:16.197 [2024-07-15 22:05:10.323300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.197 [2024-07-15 22:05:10.323369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.197 [2024-07-15 22:05:10.323384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.323392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.323398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.323416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 00:27:16.198 [2024-07-15 22:05:10.333311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.333382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.333397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.333405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.333411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.333425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 00:27:16.198 [2024-07-15 22:05:10.343324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.343396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.343412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.343419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.343426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.343441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 
00:27:16.198 [2024-07-15 22:05:10.353365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.353437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.353454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.353462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.353469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.353484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 00:27:16.198 [2024-07-15 22:05:10.363402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.363468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.363484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.363491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.363498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.363514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 00:27:16.198 [2024-07-15 22:05:10.373428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.373501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.373518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.373525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.373532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.373547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 
00:27:16.198 [2024-07-15 22:05:10.383444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.383517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.383533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.383540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.383547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.383561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 00:27:16.198 [2024-07-15 22:05:10.393465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.393537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.393552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.393560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.393568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.393583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 00:27:16.198 [2024-07-15 22:05:10.403517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.403589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.403604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.403612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.403619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.403634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 
00:27:16.198 [2024-07-15 22:05:10.413526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.413638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.413655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.413663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.413673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.413689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 00:27:16.198 [2024-07-15 22:05:10.423563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.423633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.423649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.423656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.423663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.423678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 00:27:16.198 [2024-07-15 22:05:10.433608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.198 [2024-07-15 22:05:10.433677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.198 [2024-07-15 22:05:10.433692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.198 [2024-07-15 22:05:10.433699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.198 [2024-07-15 22:05:10.433705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.198 [2024-07-15 22:05:10.433720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.198 qpair failed and we were unable to recover it. 
00:27:16.468 [2024-07-15 22:05:10.443636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-07-15 22:05:10.443716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-07-15 22:05:10.443733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-07-15 22:05:10.443740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-07-15 22:05:10.443748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.468 [2024-07-15 22:05:10.443763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 00:27:16.468 [2024-07-15 22:05:10.453624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-07-15 22:05:10.453701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-07-15 22:05:10.453717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-07-15 22:05:10.453725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-07-15 22:05:10.453732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.468 [2024-07-15 22:05:10.453748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 00:27:16.468 [2024-07-15 22:05:10.463682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-07-15 22:05:10.463757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-07-15 22:05:10.463773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-07-15 22:05:10.463781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-07-15 22:05:10.463788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.468 [2024-07-15 22:05:10.463803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 
00:27:16.468 [2024-07-15 22:05:10.473718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-07-15 22:05:10.473794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-07-15 22:05:10.473810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.468 [2024-07-15 22:05:10.473817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.468 [2024-07-15 22:05:10.473824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.468 [2024-07-15 22:05:10.473839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.468 qpair failed and we were unable to recover it. 00:27:16.468 [2024-07-15 22:05:10.483735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.468 [2024-07-15 22:05:10.483809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.468 [2024-07-15 22:05:10.483824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.483832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.483838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.469 [2024-07-15 22:05:10.483853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.469 qpair failed and we were unable to recover it. 00:27:16.469 [2024-07-15 22:05:10.493775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.469 [2024-07-15 22:05:10.493868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.469 [2024-07-15 22:05:10.493883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.493890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.493897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.469 [2024-07-15 22:05:10.493913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.469 qpair failed and we were unable to recover it. 
00:27:16.469 [2024-07-15 22:05:10.503801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.469 [2024-07-15 22:05:10.503868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.469 [2024-07-15 22:05:10.503883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.503896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.503903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.469 [2024-07-15 22:05:10.503918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.469 qpair failed and we were unable to recover it. 00:27:16.469 [2024-07-15 22:05:10.513842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.469 [2024-07-15 22:05:10.513921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.469 [2024-07-15 22:05:10.513939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.513946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.513953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.469 [2024-07-15 22:05:10.513968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.469 qpair failed and we were unable to recover it. 00:27:16.469 [2024-07-15 22:05:10.523865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.469 [2024-07-15 22:05:10.523941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.469 [2024-07-15 22:05:10.523956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.523963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.523969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.469 [2024-07-15 22:05:10.523984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.469 qpair failed and we were unable to recover it. 
00:27:16.469 [2024-07-15 22:05:10.533826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.469 [2024-07-15 22:05:10.533894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.469 [2024-07-15 22:05:10.533909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.533916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.533922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.469 [2024-07-15 22:05:10.533937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.469 qpair failed and we were unable to recover it. 00:27:16.469 [2024-07-15 22:05:10.543923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.469 [2024-07-15 22:05:10.543991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.469 [2024-07-15 22:05:10.544006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.544014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.544021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.469 [2024-07-15 22:05:10.544037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.469 qpair failed and we were unable to recover it. 00:27:16.469 [2024-07-15 22:05:10.553967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.469 [2024-07-15 22:05:10.554080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.469 [2024-07-15 22:05:10.554098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.554105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.554111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.469 [2024-07-15 22:05:10.554127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.469 qpair failed and we were unable to recover it. 
00:27:16.469 [2024-07-15 22:05:10.563995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.469 [2024-07-15 22:05:10.564060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.469 [2024-07-15 22:05:10.564076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.564083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.564090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.469 [2024-07-15 22:05:10.564105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.469 qpair failed and we were unable to recover it. 00:27:16.469 [2024-07-15 22:05:10.574057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.469 [2024-07-15 22:05:10.574128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.469 [2024-07-15 22:05:10.574143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.469 [2024-07-15 22:05:10.574150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.469 [2024-07-15 22:05:10.574157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.574172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 00:27:16.470 [2024-07-15 22:05:10.584041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.470 [2024-07-15 22:05:10.584113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.470 [2024-07-15 22:05:10.584128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.470 [2024-07-15 22:05:10.584136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.470 [2024-07-15 22:05:10.584142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.584156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 
00:27:16.470 [2024-07-15 22:05:10.594078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.470 [2024-07-15 22:05:10.594152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.470 [2024-07-15 22:05:10.594167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.470 [2024-07-15 22:05:10.594178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.470 [2024-07-15 22:05:10.594184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.594199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 00:27:16.470 [2024-07-15 22:05:10.604106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.470 [2024-07-15 22:05:10.604173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.470 [2024-07-15 22:05:10.604188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.470 [2024-07-15 22:05:10.604195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.470 [2024-07-15 22:05:10.604201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.604216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 00:27:16.470 [2024-07-15 22:05:10.614149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.470 [2024-07-15 22:05:10.614221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.470 [2024-07-15 22:05:10.614240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.470 [2024-07-15 22:05:10.614248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.470 [2024-07-15 22:05:10.614254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.614269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 
00:27:16.470 [2024-07-15 22:05:10.624150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.470 [2024-07-15 22:05:10.624220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.470 [2024-07-15 22:05:10.624240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.470 [2024-07-15 22:05:10.624248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.470 [2024-07-15 22:05:10.624254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.624270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 00:27:16.470 [2024-07-15 22:05:10.634112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.470 [2024-07-15 22:05:10.634180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.470 [2024-07-15 22:05:10.634195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.470 [2024-07-15 22:05:10.634202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.470 [2024-07-15 22:05:10.634208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.634223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 00:27:16.470 [2024-07-15 22:05:10.644203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.470 [2024-07-15 22:05:10.644284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.470 [2024-07-15 22:05:10.644300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.470 [2024-07-15 22:05:10.644307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.470 [2024-07-15 22:05:10.644314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.644329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 
00:27:16.470 [2024-07-15 22:05:10.654185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.470 [2024-07-15 22:05:10.654257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.470 [2024-07-15 22:05:10.654273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.470 [2024-07-15 22:05:10.654280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.470 [2024-07-15 22:05:10.654287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.654302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 00:27:16.470 [2024-07-15 22:05:10.664280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.470 [2024-07-15 22:05:10.664346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.470 [2024-07-15 22:05:10.664361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.470 [2024-07-15 22:05:10.664369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.470 [2024-07-15 22:05:10.664374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.470 [2024-07-15 22:05:10.664390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.470 qpair failed and we were unable to recover it. 00:27:16.470 [2024-07-15 22:05:10.674317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.471 [2024-07-15 22:05:10.674390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.471 [2024-07-15 22:05:10.674406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.471 [2024-07-15 22:05:10.674413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.471 [2024-07-15 22:05:10.674420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:16.471 [2024-07-15 22:05:10.674434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.471 qpair failed and we were unable to recover it. 
00:27:16.471 [2024-07-15 22:05:10.684355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.471 [2024-07-15 22:05:10.684423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.471 [2024-07-15 22:05:10.684441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.471 [2024-07-15 22:05:10.684448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.471 [2024-07-15 22:05:10.684455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.471 [2024-07-15 22:05:10.684470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.471 qpair failed and we were unable to recover it.
00:27:16.471 [2024-07-15 22:05:10.694415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.471 [2024-07-15 22:05:10.694486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.471 [2024-07-15 22:05:10.694501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.471 [2024-07-15 22:05:10.694508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.471 [2024-07-15 22:05:10.694514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.471 [2024-07-15 22:05:10.694528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.471 qpair failed and we were unable to recover it.
00:27:16.471 [2024-07-15 22:05:10.704399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.471 [2024-07-15 22:05:10.704516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.471 [2024-07-15 22:05:10.704532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.471 [2024-07-15 22:05:10.704539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.471 [2024-07-15 22:05:10.704546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.471 [2024-07-15 22:05:10.704561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.471 qpair failed and we were unable to recover it.
00:27:16.731 [2024-07-15 22:05:10.714480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.731 [2024-07-15 22:05:10.714551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.731 [2024-07-15 22:05:10.714567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.731 [2024-07-15 22:05:10.714574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.731 [2024-07-15 22:05:10.714580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.731 [2024-07-15 22:05:10.714595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.731 qpair failed and we were unable to recover it.
00:27:16.731 [2024-07-15 22:05:10.724516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.731 [2024-07-15 22:05:10.724588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.731 [2024-07-15 22:05:10.724604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.731 [2024-07-15 22:05:10.724611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.731 [2024-07-15 22:05:10.724617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.731 [2024-07-15 22:05:10.724636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.731 qpair failed and we were unable to recover it.
00:27:16.731 [2024-07-15 22:05:10.734538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.731 [2024-07-15 22:05:10.734610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.731 [2024-07-15 22:05:10.734626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.731 [2024-07-15 22:05:10.734633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.731 [2024-07-15 22:05:10.734639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.731 [2024-07-15 22:05:10.734654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.731 qpair failed and we were unable to recover it.
00:27:16.731 [2024-07-15 22:05:10.744525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.731 [2024-07-15 22:05:10.744612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.731 [2024-07-15 22:05:10.744628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.731 [2024-07-15 22:05:10.744635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.731 [2024-07-15 22:05:10.744641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.731 [2024-07-15 22:05:10.744656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.731 qpair failed and we were unable to recover it.
00:27:16.731 [2024-07-15 22:05:10.754537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.731 [2024-07-15 22:05:10.754611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.731 [2024-07-15 22:05:10.754628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.731 [2024-07-15 22:05:10.754636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.731 [2024-07-15 22:05:10.754642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.731 [2024-07-15 22:05:10.754657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.731 qpair failed and we were unable to recover it.
00:27:16.731 [2024-07-15 22:05:10.764590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.731 [2024-07-15 22:05:10.764662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.731 [2024-07-15 22:05:10.764678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.731 [2024-07-15 22:05:10.764685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.731 [2024-07-15 22:05:10.764692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.731 [2024-07-15 22:05:10.764706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.731 qpair failed and we were unable to recover it.
00:27:16.731 [2024-07-15 22:05:10.774607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.731 [2024-07-15 22:05:10.774722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.731 [2024-07-15 22:05:10.774742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.731 [2024-07-15 22:05:10.774749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.731 [2024-07-15 22:05:10.774756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.731 [2024-07-15 22:05:10.774773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.731 qpair failed and we were unable to recover it.
00:27:16.731 [2024-07-15 22:05:10.784635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.731 [2024-07-15 22:05:10.784707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.731 [2024-07-15 22:05:10.784722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.731 [2024-07-15 22:05:10.784729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.731 [2024-07-15 22:05:10.784735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.731 [2024-07-15 22:05:10.784750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.731 qpair failed and we were unable to recover it.
00:27:16.731 [2024-07-15 22:05:10.794668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.731 [2024-07-15 22:05:10.794752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.731 [2024-07-15 22:05:10.794767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.731 [2024-07-15 22:05:10.794775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.794781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.794796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.804624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.804694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.804711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.804719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.804726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.804741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.814766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.814874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.814891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.814898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.814907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.814922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.824753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.824823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.824840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.824847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.824854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.824870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.834802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.834866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.834882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.834889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.834895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.834910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.844858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.844963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.844978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.844986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.844992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.845008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.854850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.854929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.854944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.854951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.854958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.854972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.864866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.864951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.864966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.864974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.864980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.864994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.874902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.874980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.874995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.875002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.875009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.875023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.884917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.884997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.885012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.885019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.885026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.885041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.894966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.895035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.895050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.895057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.895064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.895078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.904984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.905055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.905072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.905082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.905089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.905104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.915015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.915090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.915105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.915112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.915118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.915133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.925044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.925123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.925139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.925146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.925152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.925167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.732 [2024-07-15 22:05:10.935075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.732 [2024-07-15 22:05:10.935146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.732 [2024-07-15 22:05:10.935161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.732 [2024-07-15 22:05:10.935168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.732 [2024-07-15 22:05:10.935175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.732 [2024-07-15 22:05:10.935190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.732 qpair failed and we were unable to recover it.
00:27:16.733 [2024-07-15 22:05:10.945096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.733 [2024-07-15 22:05:10.945162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.733 [2024-07-15 22:05:10.945178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.733 [2024-07-15 22:05:10.945186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.733 [2024-07-15 22:05:10.945192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.733 [2024-07-15 22:05:10.945207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.733 qpair failed and we were unable to recover it.
00:27:16.733 [2024-07-15 22:05:10.955089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.733 [2024-07-15 22:05:10.955159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.733 [2024-07-15 22:05:10.955174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.733 [2024-07-15 22:05:10.955181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.733 [2024-07-15 22:05:10.955188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.733 [2024-07-15 22:05:10.955202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.733 qpair failed and we were unable to recover it.
00:27:16.733 [2024-07-15 22:05:10.965169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.733 [2024-07-15 22:05:10.965252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.733 [2024-07-15 22:05:10.965268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.733 [2024-07-15 22:05:10.965275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.733 [2024-07-15 22:05:10.965281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.733 [2024-07-15 22:05:10.965296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.733 qpair failed and we were unable to recover it.
00:27:16.992 [2024-07-15 22:05:10.975176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.992 [2024-07-15 22:05:10.975261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.992 [2024-07-15 22:05:10.975276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.992 [2024-07-15 22:05:10.975283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.992 [2024-07-15 22:05:10.975289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.992 [2024-07-15 22:05:10.975305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.992 qpair failed and we were unable to recover it.
00:27:16.992 [2024-07-15 22:05:10.985223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.992 [2024-07-15 22:05:10.985303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.992 [2024-07-15 22:05:10.985317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.992 [2024-07-15 22:05:10.985324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.992 [2024-07-15 22:05:10.985331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.992 [2024-07-15 22:05:10.985345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.992 qpair failed and we were unable to recover it.
00:27:16.992 [2024-07-15 22:05:10.995248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.992 [2024-07-15 22:05:10.995319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.992 [2024-07-15 22:05:10.995334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.992 [2024-07-15 22:05:10.995344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.992 [2024-07-15 22:05:10.995351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.992 [2024-07-15 22:05:10.995365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.005205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.005275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.005290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.005297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.005304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.005318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.015305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.015379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.015395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.015402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.015409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.015424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.025334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.025411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.025426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.025434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.025440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.025455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.035365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.035436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.035451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.035458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.035465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.035479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.045406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.045521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.045538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.045547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.045554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.045569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.055413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.055478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.055493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.055501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.055507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.055522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.065456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.065574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.065591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.065599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.065606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.065622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.075486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.075569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.075584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.075592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.075598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.075613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.085443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.085508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.085526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.085534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.085541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.085556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.095551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.095622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.095637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.095644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.095651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.095665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.105572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.105640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.105655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.105663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.105670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.105685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.115593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.115666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.115681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.993 [2024-07-15 22:05:11.115688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.993 [2024-07-15 22:05:11.115695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.993 [2024-07-15 22:05:11.115709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.993 qpair failed and we were unable to recover it.
00:27:16.993 [2024-07-15 22:05:11.125596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.993 [2024-07-15 22:05:11.125674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.993 [2024-07-15 22:05:11.125689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.125696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.125703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.125721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.135663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.135730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.135745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.135752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.135759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.135773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.145681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.145753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.145769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.145777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.145783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.145798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.155714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.155794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.155810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.155817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.155823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.155838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.165744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.165816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.165831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.165838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.165845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.165859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.175767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.175841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.175860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.175868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.175874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.175889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.185723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.185791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.185808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.185815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.185822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.185837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.195819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.195890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.195906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.195912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.195919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.195933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.205850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.205931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.205946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.205953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.205960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.205974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.215886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.216049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.216066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.216072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.216085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.216101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:16.994 [2024-07-15 22:05:11.225974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.994 [2024-07-15 22:05:11.226047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.994 [2024-07-15 22:05:11.226063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.994 [2024-07-15 22:05:11.226072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.994 [2024-07-15 22:05:11.226078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:16.994 [2024-07-15 22:05:11.226092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.994 qpair failed and we were unable to recover it.
00:27:17.254 [2024-07-15 22:05:11.235917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.254 [2024-07-15 22:05:11.235989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.254 [2024-07-15 22:05:11.236005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.254 [2024-07-15 22:05:11.236012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.254 [2024-07-15 22:05:11.236019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:17.254 [2024-07-15 22:05:11.236034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.254 qpair failed and we were unable to recover it.
00:27:17.254 [2024-07-15 22:05:11.245905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.254 [2024-07-15 22:05:11.245981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.254 [2024-07-15 22:05:11.245995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.254 [2024-07-15 22:05:11.246002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.254 [2024-07-15 22:05:11.246009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:17.254 [2024-07-15 22:05:11.246024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:17.254 qpair failed and we were unable to recover it.
00:27:17.254 [2024-07-15 22:05:11.255907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.254 [2024-07-15 22:05:11.255990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.254 [2024-07-15 22:05:11.256006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.254 [2024-07-15 22:05:11.256014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.254 [2024-07-15 22:05:11.256020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.254 [2024-07-15 22:05:11.256035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.254 qpair failed and we were unable to recover it. 00:27:17.254 [2024-07-15 22:05:11.265958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.254 [2024-07-15 22:05:11.266036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.266053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.266060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.266067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.266082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 00:27:17.255 [2024-07-15 22:05:11.276033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.276100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.276116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.276124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.276130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.276146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 
00:27:17.255 [2024-07-15 22:05:11.286095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.286171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.286186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.286194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.286200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.286215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 00:27:17.255 [2024-07-15 22:05:11.296131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.296239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.296255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.296262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.296269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.296284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 00:27:17.255 [2024-07-15 22:05:11.306094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.306168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.306183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.306190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.306200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.306216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 
00:27:17.255 [2024-07-15 22:05:11.316096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.316166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.316182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.316189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.316196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.316211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 00:27:17.255 [2024-07-15 22:05:11.326211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.326288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.326303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.326311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.326317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.326333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 00:27:17.255 [2024-07-15 22:05:11.336164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.336241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.336256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.336264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.336270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.336284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 
00:27:17.255 [2024-07-15 22:05:11.346191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.346264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.346280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.346288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.346294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.346309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 00:27:17.255 [2024-07-15 22:05:11.356236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.356309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.356326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.356334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.356340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.356356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 00:27:17.255 [2024-07-15 22:05:11.366325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.366392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.366408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.366415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.366422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.366437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 
00:27:17.255 [2024-07-15 22:05:11.376361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.376435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.376451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.376458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.376465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.376480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 00:27:17.255 [2024-07-15 22:05:11.386405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.386475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.386490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.386497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.386504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.386518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 00:27:17.255 [2024-07-15 22:05:11.396453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.396538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.396554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.255 [2024-07-15 22:05:11.396564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.255 [2024-07-15 22:05:11.396570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.255 [2024-07-15 22:05:11.396586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.255 qpair failed and we were unable to recover it. 
00:27:17.255 [2024-07-15 22:05:11.406409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.255 [2024-07-15 22:05:11.406476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.255 [2024-07-15 22:05:11.406491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.256 [2024-07-15 22:05:11.406498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.256 [2024-07-15 22:05:11.406505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.256 [2024-07-15 22:05:11.406519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.256 qpair failed and we were unable to recover it. 00:27:17.256 [2024-07-15 22:05:11.416413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.256 [2024-07-15 22:05:11.416480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.256 [2024-07-15 22:05:11.416497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.256 [2024-07-15 22:05:11.416504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.256 [2024-07-15 22:05:11.416510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.256 [2024-07-15 22:05:11.416525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.256 qpair failed and we were unable to recover it. 00:27:17.256 [2024-07-15 22:05:11.426462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.256 [2024-07-15 22:05:11.426533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.256 [2024-07-15 22:05:11.426548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.256 [2024-07-15 22:05:11.426557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.256 [2024-07-15 22:05:11.426563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.256 [2024-07-15 22:05:11.426578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.256 qpair failed and we were unable to recover it. 
00:27:17.256 [2024-07-15 22:05:11.436460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.256 [2024-07-15 22:05:11.436530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.256 [2024-07-15 22:05:11.436545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.256 [2024-07-15 22:05:11.436552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.256 [2024-07-15 22:05:11.436559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.256 [2024-07-15 22:05:11.436573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.256 qpair failed and we were unable to recover it. 00:27:17.256 [2024-07-15 22:05:11.446602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.256 [2024-07-15 22:05:11.446672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.256 [2024-07-15 22:05:11.446688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.256 [2024-07-15 22:05:11.446695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.256 [2024-07-15 22:05:11.446702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.256 [2024-07-15 22:05:11.446716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.256 qpair failed and we were unable to recover it. 00:27:17.256 [2024-07-15 22:05:11.456630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.256 [2024-07-15 22:05:11.456696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.256 [2024-07-15 22:05:11.456711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.256 [2024-07-15 22:05:11.456718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.256 [2024-07-15 22:05:11.456724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.256 [2024-07-15 22:05:11.456740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.256 qpair failed and we were unable to recover it. 
00:27:17.256 [2024-07-15 22:05:11.466618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.256 [2024-07-15 22:05:11.466688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.256 [2024-07-15 22:05:11.466704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.256 [2024-07-15 22:05:11.466711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.256 [2024-07-15 22:05:11.466717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.256 [2024-07-15 22:05:11.466732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.256 qpair failed and we were unable to recover it. 00:27:17.256 [2024-07-15 22:05:11.476595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.256 [2024-07-15 22:05:11.476683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.256 [2024-07-15 22:05:11.476698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.256 [2024-07-15 22:05:11.476705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.256 [2024-07-15 22:05:11.476712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.256 [2024-07-15 22:05:11.476726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.256 qpair failed and we were unable to recover it. 00:27:17.256 [2024-07-15 22:05:11.486666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.256 [2024-07-15 22:05:11.486735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.256 [2024-07-15 22:05:11.486754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.256 [2024-07-15 22:05:11.486762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.256 [2024-07-15 22:05:11.486768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.256 [2024-07-15 22:05:11.486783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.256 qpair failed and we were unable to recover it. 
00:27:17.517 [2024-07-15 22:05:11.496646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.496767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.496785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.496792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.496799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.496814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 00:27:17.517 [2024-07-15 22:05:11.506741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.506810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.506828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.506835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.506843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.506858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 00:27:17.517 [2024-07-15 22:05:11.516712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.516785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.516801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.516809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.516815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.516830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 
00:27:17.517 [2024-07-15 22:05:11.526778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.526845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.526860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.526868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.526875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.526893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 00:27:17.517 [2024-07-15 22:05:11.536847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.536916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.536932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.536939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.536947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.536962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 00:27:17.517 [2024-07-15 22:05:11.546892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.546961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.546978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.546985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.546992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.547008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 
00:27:17.517 [2024-07-15 22:05:11.556942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.557016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.557033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.557041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.557048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.557065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 00:27:17.517 [2024-07-15 22:05:11.566841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.566903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.566920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.566928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.566935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.566951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 00:27:17.517 [2024-07-15 22:05:11.576956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.577064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.577084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.577092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.577098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.577113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 
00:27:17.517 [2024-07-15 22:05:11.586986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.587057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.587072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.587079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.587085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.587100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 00:27:17.517 [2024-07-15 22:05:11.597022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.597096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.597111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.597118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.597125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.597140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 00:27:17.517 [2024-07-15 22:05:11.607041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.607124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.607138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.607146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.607153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.607167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 
00:27:17.517 [2024-07-15 22:05:11.617060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.617128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.617144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.617151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.617160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.517 [2024-07-15 22:05:11.617174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.517 qpair failed and we were unable to recover it. 00:27:17.517 [2024-07-15 22:05:11.627094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.517 [2024-07-15 22:05:11.627161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.517 [2024-07-15 22:05:11.627177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.517 [2024-07-15 22:05:11.627184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.517 [2024-07-15 22:05:11.627190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.627205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 00:27:17.518 [2024-07-15 22:05:11.637142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.637212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.637231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.637238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.637245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.637259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 
00:27:17.518 [2024-07-15 22:05:11.647167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.647240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.647256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.647263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.647269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.647285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 00:27:17.518 [2024-07-15 22:05:11.657210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.657318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.657335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.657342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.657348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.657363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 00:27:17.518 [2024-07-15 22:05:11.667146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.667218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.667237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.667245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.667252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.667267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 
00:27:17.518 [2024-07-15 22:05:11.677246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.677314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.677329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.677336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.677342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.677356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 00:27:17.518 [2024-07-15 22:05:11.687276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.687345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.687361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.687368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.687374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.687389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 00:27:17.518 [2024-07-15 22:05:11.697341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.697454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.697470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.697477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.697484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.697500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 
00:27:17.518 [2024-07-15 22:05:11.707320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.707388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.707402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.707409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.707419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.707434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 00:27:17.518 [2024-07-15 22:05:11.717366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.717437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.717452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.717459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.717466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.717481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 00:27:17.518 [2024-07-15 22:05:11.727403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.727472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.727486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.727494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.727500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.727515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 
00:27:17.518 [2024-07-15 22:05:11.737449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.737514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.737529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.737535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.737542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.737557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 00:27:17.518 [2024-07-15 22:05:11.747534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.518 [2024-07-15 22:05:11.747614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.518 [2024-07-15 22:05:11.747629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.518 [2024-07-15 22:05:11.747636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.518 [2024-07-15 22:05:11.747643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.518 [2024-07-15 22:05:11.747657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.518 qpair failed and we were unable to recover it. 00:27:17.779 [2024-07-15 22:05:11.757408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.779 [2024-07-15 22:05:11.757484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.779 [2024-07-15 22:05:11.757503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.779 [2024-07-15 22:05:11.757511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.779 [2024-07-15 22:05:11.757518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.779 [2024-07-15 22:05:11.757534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.779 qpair failed and we were unable to recover it. 
00:27:17.779 [2024-07-15 22:05:11.767515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.779 [2024-07-15 22:05:11.767586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.779 [2024-07-15 22:05:11.767603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.779 [2024-07-15 22:05:11.767610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.779 [2024-07-15 22:05:11.767617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.779 [2024-07-15 22:05:11.767632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.779 qpair failed and we were unable to recover it. 00:27:17.779 [2024-07-15 22:05:11.777533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.779 [2024-07-15 22:05:11.777604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.779 [2024-07-15 22:05:11.777620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.779 [2024-07-15 22:05:11.777628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.779 [2024-07-15 22:05:11.777635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.779 [2024-07-15 22:05:11.777650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.779 qpair failed and we were unable to recover it. 00:27:17.779 [2024-07-15 22:05:11.787552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.779 [2024-07-15 22:05:11.787620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.779 [2024-07-15 22:05:11.787636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.779 [2024-07-15 22:05:11.787642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.779 [2024-07-15 22:05:11.787649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:17.779 [2024-07-15 22:05:11.787664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.779 qpair failed and we were unable to recover it. 
00:27:18.305 [2024-07-15 22:05:12.459484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.305 [2024-07-15 22:05:12.459559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.305 [2024-07-15 22:05:12.459574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.305 [2024-07-15 22:05:12.459582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.305 [2024-07-15 22:05:12.459588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.305 [2024-07-15 22:05:12.459603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.305 qpair failed and we were unable to recover it. 00:27:18.305 [2024-07-15 22:05:12.469510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.305 [2024-07-15 22:05:12.469580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.305 [2024-07-15 22:05:12.469599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.305 [2024-07-15 22:05:12.469606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.305 [2024-07-15 22:05:12.469612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.305 [2024-07-15 22:05:12.469627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.305 qpair failed and we were unable to recover it. 00:27:18.305 [2024-07-15 22:05:12.479567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.305 [2024-07-15 22:05:12.479636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.305 [2024-07-15 22:05:12.479652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.305 [2024-07-15 22:05:12.479659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.305 [2024-07-15 22:05:12.479665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.305 [2024-07-15 22:05:12.479680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.305 qpair failed and we were unable to recover it. 
00:27:18.305 [2024-07-15 22:05:12.489562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.305 [2024-07-15 22:05:12.489632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.305 [2024-07-15 22:05:12.489647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.305 [2024-07-15 22:05:12.489655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.305 [2024-07-15 22:05:12.489661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.305 [2024-07-15 22:05:12.489676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.305 qpair failed and we were unable to recover it. 00:27:18.305 [2024-07-15 22:05:12.499632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.305 [2024-07-15 22:05:12.499740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.305 [2024-07-15 22:05:12.499756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.305 [2024-07-15 22:05:12.499763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.305 [2024-07-15 22:05:12.499770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.305 [2024-07-15 22:05:12.499784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.305 qpair failed and we were unable to recover it. 00:27:18.305 [2024-07-15 22:05:12.509612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.305 [2024-07-15 22:05:12.509678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.305 [2024-07-15 22:05:12.509693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.305 [2024-07-15 22:05:12.509700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.305 [2024-07-15 22:05:12.509709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.305 [2024-07-15 22:05:12.509724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.305 qpair failed and we were unable to recover it. 
00:27:18.305 [2024-07-15 22:05:12.519675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.305 [2024-07-15 22:05:12.519763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.305 [2024-07-15 22:05:12.519779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.305 [2024-07-15 22:05:12.519786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.305 [2024-07-15 22:05:12.519792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.305 [2024-07-15 22:05:12.519807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.305 qpair failed and we were unable to recover it. 00:27:18.305 [2024-07-15 22:05:12.529601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.305 [2024-07-15 22:05:12.529677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.305 [2024-07-15 22:05:12.529692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.305 [2024-07-15 22:05:12.529700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.305 [2024-07-15 22:05:12.529706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.305 [2024-07-15 22:05:12.529721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.305 qpair failed and we were unable to recover it. 00:27:18.305 [2024-07-15 22:05:12.539696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.305 [2024-07-15 22:05:12.539769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.305 [2024-07-15 22:05:12.539784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.305 [2024-07-15 22:05:12.539792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.305 [2024-07-15 22:05:12.539798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.305 [2024-07-15 22:05:12.539813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.305 qpair failed and we were unable to recover it. 
00:27:18.567 [2024-07-15 22:05:12.549726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.567 [2024-07-15 22:05:12.549794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.567 [2024-07-15 22:05:12.549809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.567 [2024-07-15 22:05:12.549817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.567 [2024-07-15 22:05:12.549823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.567 [2024-07-15 22:05:12.549838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.567 qpair failed and we were unable to recover it. 00:27:18.567 [2024-07-15 22:05:12.559728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.567 [2024-07-15 22:05:12.559813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.567 [2024-07-15 22:05:12.559828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.567 [2024-07-15 22:05:12.559836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.567 [2024-07-15 22:05:12.559842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.567 [2024-07-15 22:05:12.559857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.567 qpair failed and we were unable to recover it. 00:27:18.567 [2024-07-15 22:05:12.569807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.567 [2024-07-15 22:05:12.569875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.567 [2024-07-15 22:05:12.569891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.567 [2024-07-15 22:05:12.569898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.567 [2024-07-15 22:05:12.569904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.567 [2024-07-15 22:05:12.569919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.567 qpair failed and we were unable to recover it. 
00:27:18.567 [2024-07-15 22:05:12.579838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.579909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.579924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.579931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.579938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.579952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.589832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.589902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.589917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.589924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.589930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.589944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.599882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.599950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.599965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.599976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.599983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.599997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 
00:27:18.568 [2024-07-15 22:05:12.609859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.609932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.609948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.609955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.609962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.609978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.619867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.619935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.619950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.619958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.619965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.619980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.629993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.630066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.630083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.630090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.630097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.630113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 
00:27:18.568 [2024-07-15 22:05:12.639971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.640042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.640058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.640066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.640072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.640088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.650030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.650100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.650116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.650123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.650130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.650144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.659955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.660028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.660044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.660051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.660057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.660072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 
00:27:18.568 [2024-07-15 22:05:12.670065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.670136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.670151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.670158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.670165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.670180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.680090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.680202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.680218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.680229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.680236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.680252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.690064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.690182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.690198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.690209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.690216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.690235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 
00:27:18.568 [2024-07-15 22:05:12.700072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.700138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.700153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.700160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.700166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.700181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.710114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.710238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.710254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.568 [2024-07-15 22:05:12.710262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.568 [2024-07-15 22:05:12.710268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.568 [2024-07-15 22:05:12.710284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.568 qpair failed and we were unable to recover it. 00:27:18.568 [2024-07-15 22:05:12.720158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.568 [2024-07-15 22:05:12.720231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.568 [2024-07-15 22:05:12.720246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.569 [2024-07-15 22:05:12.720254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.569 [2024-07-15 22:05:12.720260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.569 [2024-07-15 22:05:12.720275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-07-15 22:05:12.730150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.569 [2024-07-15 22:05:12.730215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.569 [2024-07-15 22:05:12.730234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.569 [2024-07-15 22:05:12.730242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.569 [2024-07-15 22:05:12.730248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.569 [2024-07-15 22:05:12.730264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-07-15 22:05:12.740275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.569 [2024-07-15 22:05:12.740341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.569 [2024-07-15 22:05:12.740355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.569 [2024-07-15 22:05:12.740363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.569 [2024-07-15 22:05:12.740369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.569 [2024-07-15 22:05:12.740384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-07-15 22:05:12.750275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.569 [2024-07-15 22:05:12.750390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.569 [2024-07-15 22:05:12.750407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.569 [2024-07-15 22:05:12.750414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.569 [2024-07-15 22:05:12.750421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.569 [2024-07-15 22:05:12.750437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-07-15 22:05:12.760285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.569 [2024-07-15 22:05:12.760368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.569 [2024-07-15 22:05:12.760383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.569 [2024-07-15 22:05:12.760390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.569 [2024-07-15 22:05:12.760396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.569 [2024-07-15 22:05:12.760411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-07-15 22:05:12.770356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.569 [2024-07-15 22:05:12.770443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.569 [2024-07-15 22:05:12.770459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.569 [2024-07-15 22:05:12.770466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.569 [2024-07-15 22:05:12.770471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.569 [2024-07-15 22:05:12.770487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-07-15 22:05:12.780389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.569 [2024-07-15 22:05:12.780510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.569 [2024-07-15 22:05:12.780530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.569 [2024-07-15 22:05:12.780538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.569 [2024-07-15 22:05:12.780545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.569 [2024-07-15 22:05:12.780560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-07-15 22:05:12.790395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.569 [2024-07-15 22:05:12.790468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.569 [2024-07-15 22:05:12.790483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.569 [2024-07-15 22:05:12.790490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.569 [2024-07-15 22:05:12.790496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.569 [2024-07-15 22:05:12.790511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-07-15 22:05:12.800452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.569 [2024-07-15 22:05:12.800527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.569 [2024-07-15 22:05:12.800542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.569 [2024-07-15 22:05:12.800550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.569 [2024-07-15 22:05:12.800556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.569 [2024-07-15 22:05:12.800570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.830 [2024-07-15 22:05:12.810470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.810544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.810560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.810568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.810575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.810591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 
00:27:18.830 [2024-07-15 22:05:12.820490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.820559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.820576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.820583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.820591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.820611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 00:27:18.830 [2024-07-15 22:05:12.830454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.830524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.830540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.830547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.830553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.830567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 00:27:18.830 [2024-07-15 22:05:12.840459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.840530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.840545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.840552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.840558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.840573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 
00:27:18.830 [2024-07-15 22:05:12.850592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.850663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.850680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.850687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.850694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.850710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 00:27:18.830 [2024-07-15 22:05:12.860528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.860598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.860612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.860619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.860625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.860640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 00:27:18.830 [2024-07-15 22:05:12.870611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.870707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.870724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.870731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.870737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.870753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 
00:27:18.830 [2024-07-15 22:05:12.880602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.880670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.880685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.880692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.880699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.880713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 00:27:18.830 [2024-07-15 22:05:12.890667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.890737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.890752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.890760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.890766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.890781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 00:27:18.830 [2024-07-15 22:05:12.900692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.830 [2024-07-15 22:05:12.900757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.830 [2024-07-15 22:05:12.900772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.830 [2024-07-15 22:05:12.900780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.830 [2024-07-15 22:05:12.900786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:18.830 [2024-07-15 22:05:12.900801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:18.830 qpair failed and we were unable to recover it. 
00:27:18.830 [2024-07-15 22:05:12.910742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.830 [2024-07-15 22:05:12.910818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.830 [2024-07-15 22:05:12.910833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.830 [2024-07-15 22:05:12.910840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.830 [2024-07-15 22:05:12.910849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90
00:27:18.830 [2024-07-15 22:05:12.910864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.830 qpair failed and we were unable to recover it.
[The seven-line CONNECT failure signature above repeats 69 times in this section, at roughly 10 ms intervals from 22:05:12.910 to 22:05:13.592 (elapsed-time prefixes 00:27:18.830 through 00:27:19.354); only the timestamps vary. Every repetition reports the same target (trtype TCP, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1), the same tqpair 0x7f72c8000b90 and qpair id 2, and the same status (sct 1, sc 130), and each ends with "qpair failed and we were unable to recover it."]
00:27:19.615 [2024-07-15 22:05:13.602737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.602814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.602829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.602837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.602843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.602858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-15 22:05:13.612780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.612854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.612870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.612877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.612884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.612899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-15 22:05:13.622806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.622877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.622892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.622900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.622906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.622924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 
00:27:19.615 [2024-07-15 22:05:13.632828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.632963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.632980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.632987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.632994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.633010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-15 22:05:13.642859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.642924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.642940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.642947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.642953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.642968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-15 22:05:13.652898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.652965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.652980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.652988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.652994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.653009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 
00:27:19.615 [2024-07-15 22:05:13.662925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.662994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.663009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.663016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.663022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.663037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-15 22:05:13.672950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.673022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.673043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.673051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.673057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.673073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-15 22:05:13.682977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.683043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.683059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.683066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.683073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.683088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 
00:27:19.615 [2024-07-15 22:05:13.693002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.693075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.693091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.693099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.693106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.693121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-15 22:05:13.703035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.703109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.703125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.703133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.703140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.703155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-15 22:05:13.713059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.713141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.713158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.713166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.713173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.713191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.615 qpair failed and we were unable to recover it. 
00:27:19.615 [2024-07-15 22:05:13.723095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.615 [2024-07-15 22:05:13.723167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.615 [2024-07-15 22:05:13.723182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.615 [2024-07-15 22:05:13.723190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.615 [2024-07-15 22:05:13.723196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.615 [2024-07-15 22:05:13.723211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-15 22:05:13.733159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.733234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.733250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.733257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.733263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.733278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-15 22:05:13.743060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.743136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.743151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.743159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.743165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.743179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 
00:27:19.616 [2024-07-15 22:05:13.753156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.753233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.753248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.753255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.753262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.753277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-15 22:05:13.763203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.763277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.763293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.763301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.763308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.763323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-15 22:05:13.773211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.773332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.773349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.773356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.773363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.773380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 
00:27:19.616 [2024-07-15 22:05:13.783304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.783414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.783430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.783437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.783444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.783460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-15 22:05:13.793271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.793341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.793356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.793363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.793369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.793383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-15 22:05:13.803330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.803407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.803422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.803430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.803439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.803455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 
00:27:19.616 [2024-07-15 22:05:13.813335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.813401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.813417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.813424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.813430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.813445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-15 22:05:13.823374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.823447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.823462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.823470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.823476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.823491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-15 22:05:13.833401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.833470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.833485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.833492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.833498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.833513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 
00:27:19.616 [2024-07-15 22:05:13.843443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.843554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.843570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.843577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.843584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.843600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-15 22:05:13.853482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.616 [2024-07-15 22:05:13.853554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.616 [2024-07-15 22:05:13.853572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.616 [2024-07-15 22:05:13.853579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.616 [2024-07-15 22:05:13.853586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.616 [2024-07-15 22:05:13.853601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.875 [2024-07-15 22:05:13.863469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.875 [2024-07-15 22:05:13.863567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.875 [2024-07-15 22:05:13.863582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.875 [2024-07-15 22:05:13.863589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.875 [2024-07-15 22:05:13.863597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c8000b90 00:27:19.875 [2024-07-15 22:05:13.863612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.875 qpair failed and we were unable to recover it. 
00:27:19.875 [2024-07-15 22:05:13.873506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.875 [2024-07-15 22:05:13.873592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.875 [2024-07-15 22:05:13.873619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.875 [2024-07-15 22:05:13.873631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.875 [2024-07-15 22:05:13.873641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c0000b90 00:27:19.875 [2024-07-15 22:05:13.873665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:19.875 qpair failed and we were unable to recover it. 00:27:19.875 [2024-07-15 22:05:13.883543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.876 [2024-07-15 22:05:13.883617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.876 [2024-07-15 22:05:13.883634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.876 [2024-07-15 22:05:13.883642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.876 [2024-07-15 22:05:13.883648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c0000b90 00:27:19.876 [2024-07-15 22:05:13.883663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:19.876 qpair failed and we were unable to recover it. 00:27:19.876 [2024-07-15 22:05:13.893582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.876 [2024-07-15 22:05:13.893655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.876 [2024-07-15 22:05:13.893671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.876 [2024-07-15 22:05:13.893682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.876 [2024-07-15 22:05:13.893689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72c0000b90 00:27:19.876 [2024-07-15 22:05:13.893704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:19.876 qpair failed and we were unable to recover it. 
00:27:19.876 [2024-07-15 22:05:13.903633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.876 [2024-07-15 22:05:13.903720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.876 [2024-07-15 22:05:13.903750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.876 [2024-07-15 22:05:13.903762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.876 [2024-07-15 22:05:13.903772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72d0000b90 00:27:19.876 [2024-07-15 22:05:13.903796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.876 qpair failed and we were unable to recover it. 00:27:19.876 [2024-07-15 22:05:13.913642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.876 [2024-07-15 22:05:13.913714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.876 [2024-07-15 22:05:13.913731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.876 [2024-07-15 22:05:13.913739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.876 [2024-07-15 22:05:13.913745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f72d0000b90 00:27:19.876 [2024-07-15 22:05:13.913761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.876 qpair failed and we were unable to recover it. 00:27:19.876 [2024-07-15 22:05:13.913911] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:19.876 A controller has encountered a failure and is being reset. 00:27:19.876 [2024-07-15 22:05:13.923734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.876 [2024-07-15 22:05:13.923858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.876 [2024-07-15 22:05:13.923887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.876 [2024-07-15 22:05:13.923899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.876 [2024-07-15 22:05:13.923908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e7ffc0 00:27:19.876 [2024-07-15 22:05:13.923931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.876 qpair failed and we were unable to recover it. 
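Each block above is one iteration of the tc2 disconnect loop: the host keeps retrying the I/O-queue Fabrics CONNECT while the target no longer recognizes controller ID 0x1, so every attempt completes with the same status and the qpair is abandoned ("qpair failed and we were unable to recover it"). A hedged decoding of the recurring codes — the spec mapping is an interpretation, not something the log itself states:

    # sc is logged in decimal; the NVMe-oF spec tables use hex:
    printf '0x%02x\n' 130   # -> 0x82, which with SCT 1 is "Connect Invalid Parameters"
    # rc -5 is -EIO (the generic connect failure), and the CQ transport
    # error -6 is -ENXIO, printed by the log as "No such device or address".

Note also that the failing queue shifts as the loop runs (tqpair 0x7f72c8000b90 on qpair id 2, then 0x7f72c0000b90 on id 4, then 0x7f72d0000b90 on id 1) until a Keep Alive submission fails and the controller reset path takes over.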
00:27:19.876 [2024-07-15 22:05:13.933718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.876 [2024-07-15 22:05:13.933786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.876 [2024-07-15 22:05:13.933804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.876 [2024-07-15 22:05:13.933812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.876 [2024-07-15 22:05:13.933819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e7ffc0 00:27:19.876 [2024-07-15 22:05:13.933838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.876 qpair failed and we were unable to recover it. 00:27:19.876 Controller properly reset. 00:27:19.876 Initializing NVMe Controllers 00:27:19.876 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:19.876 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:19.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:19.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:19.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:19.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:19.876 Initialization complete. Launching workers. 00:27:19.876 Starting thread on core 1 00:27:19.876 Starting thread on core 2 00:27:19.876 Starting thread on core 3 00:27:19.876 Starting thread on core 0 00:27:19.876 22:05:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:19.876 00:27:19.876 real 0m10.660s 00:27:19.876 user 0m19.159s 00:27:19.876 sys 0m4.275s 00:27:19.876 22:05:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.876 22:05:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 ************************************ 00:27:19.876 END TEST nvmf_target_disconnect_tc2 00:27:19.876 ************************************ 00:27:19.876 22:05:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:19.876 22:05:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:19.876 22:05:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:19.876 22:05:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:19.876 22:05:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.876 22:05:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.876 rmmod nvme_tcp 00:27:19.876 rmmod 
nvme_fabrics 00:27:19.876 rmmod nvme_keyring 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3852985 ']' 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3852985 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3852985 ']' 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3852985 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852985 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852985' 00:27:19.876 killing process with pid 3852985 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3852985 00:27:19.876 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3852985 00:27:20.135 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:20.135 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:20.135 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:20.135 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.135 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.135 22:05:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.135 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.135 22:05:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.670 22:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:22.670 00:27:22.670 real 0m18.707s 00:27:22.670 user 0m45.879s 00:27:22.670 sys 0m8.668s 00:27:22.671 22:05:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.671 22:05:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:22.671 ************************************ 00:27:22.671 END TEST nvmf_target_disconnect 00:27:22.671 ************************************ 00:27:22.671 22:05:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:22.671 22:05:16 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:22.671 22:05:16 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.671 22:05:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.671 22:05:16 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:22.671 
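The teardown above follows nvmftestfini: sync, unload the kernel NVMe-oF initiator modules, kill the target by pid, then flush the test interface address. A minimal sketch of the same sequence, assuming the target pid is in "$pid" (3852985 in this run); the real killprocess helper polls for exit rather than relying on wait:

    sync
    modprobe -v -r nvme-tcp        # also drops now-unused nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$pid"                    # the harness then polls until the process is gone;
    wait "$pid" 2>/dev/null || :   # wait only works if nvmf_tgt is a child of this shell
    ip -4 addr flush cvl_0_1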
00:27:22.671 real 20m54.175s 00:27:22.671 user 45m4.335s 00:27:22.671 sys 6m18.327s 00:27:22.671 22:05:16 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.671 22:05:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.671 ************************************ 00:27:22.671 END TEST nvmf_tcp 00:27:22.671 ************************************ 00:27:22.671 22:05:16 -- common/autotest_common.sh@1142 -- # return 0 00:27:22.671 22:05:16 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:22.671 22:05:16 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:22.671 22:05:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:22.671 22:05:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.671 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:27:22.671 ************************************ 00:27:22.671 START TEST spdkcli_nvmf_tcp 00:27:22.671 ************************************ 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:22.671 * Looking for test storage... 00:27:22.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 
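Before the spdkcli run starts, nvmf/common.sh derives the host identity from nvme-cli: the host NQN comes from nvme gen-hostnqn, and the host ID is its trailing UUID, as the two exported values above show. A sketch of that derivation — the parameter-expansion spelling is an assumption; the harness may extract it differently:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # strip through "uuid:" to keep the bare UUID
    echo "$NVME_HOSTNQN" "$NVME_HOSTID"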
00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3854514 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3854514 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3854514 ']' 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.671 22:05:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.671 [2024-07-15 22:05:16.697885] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:27:22.671 [2024-07-15 22:05:16.697930] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854514 ] 00:27:22.671 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.671 [2024-07-15 22:05:16.751790] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:22.671 [2024-07-15 22:05:16.826244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.671 [2024-07-15 22:05:16.826248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.607 22:05:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:23.607 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:23.607 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:23.607 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:23.607 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 
00:27:23.607 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:23.607 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:23.607 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:23.607 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:23.607 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:23.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:23.607 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:23.607 ' 00:27:26.160 [2024-07-15 22:05:19.902024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.097 [2024-07-15 22:05:21.077971] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:29.017 [2024-07-15 22:05:23.240566] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:30.918 [2024-07-15 22:05:25.098319] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:32.296 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 
00:27:32.296 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:32.296 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:32.296 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:32.296 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:32.296 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:32.296 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:32.296 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:32.296 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:32.296 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:32.296 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:32.296 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 
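[Aside: spdkcli_job.py replays one '<command> <expected-output> <match-flag>' tuple per line, and each "Executing command" entry above is one of those tuples. The same target configuration can be built with plain JSON-RPC calls; a hedged sketch of a few equivalents, with method names from SPDK's rpc.py and values copied from the job above (consult rpc.py -h for exact flags):

    # Roughly equivalent target config via rpc.py instead of spdkcli.
    ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3        # 32 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
        -s N37SXV509SRW -m 4 -a                                  # serial, max ns, any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
        -t tcp -a 127.0.0.1 -s 4260 -f ipv4
]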
00:27:32.554 22:05:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:32.554 22:05:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:32.554 22:05:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.554 22:05:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:32.554 22:05:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:32.555 22:05:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.555 22:05:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:32.555 22:05:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:32.814 22:05:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:33.073 22:05:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:33.073 22:05:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:33.073 22:05:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:33.073 22:05:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.073 22:05:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:33.073 22:05:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:33.073 22:05:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.073 22:05:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:33.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:33.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:33.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:33.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:33.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:33.073 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:33.073 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:33.073 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:33.073 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:33.073 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:33.073 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:33.073 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:33.073 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:33.073 ' 00:27:38.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:38.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:38.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 
'nqn.2014-08.org.spdk:cnode2', False] 00:27:38.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:38.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:38.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:38.378 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:38.378 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:38.378 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:38.378 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:38.378 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:38.378 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:38.378 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:38.378 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3854514 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3854514 ']' 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3854514 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3854514 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3854514' 00:27:38.378 killing process with pid 3854514 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3854514 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3854514 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3854514 ']' 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3854514 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3854514 ']' 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3854514 00:27:38.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3854514) - No such process 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3854514 is not found' 00:27:38.378 Process with pid 3854514 is not found 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' 
-n '' ']' 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:38.378 00:27:38.378 real 0m15.793s 00:27:38.378 user 0m32.669s 00:27:38.378 sys 0m0.732s 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.378 22:05:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:38.378 ************************************ 00:27:38.378 END TEST spdkcli_nvmf_tcp 00:27:38.378 ************************************ 00:27:38.378 22:05:32 -- common/autotest_common.sh@1142 -- # return 0 00:27:38.378 22:05:32 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:38.378 22:05:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:38.378 22:05:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.378 22:05:32 -- common/autotest_common.sh@10 -- # set +x 00:27:38.378 ************************************ 00:27:38.378 START TEST nvmf_identify_passthru 00:27:38.378 ************************************ 00:27:38.378 22:05:32 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:38.378 * Looking for test storage... 00:27:38.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.378 22:05:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.378 22:05:32 
nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.378 22:05:32 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.378 22:05:32 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.378 22:05:32 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.378 22:05:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.378 22:05:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.378 22:05:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.378 22:05:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:38.378 22:05:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.378 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.378 22:05:32 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.378 22:05:32 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.378 22:05:32 nvmf_identify_passthru -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.378 22:05:32 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.378 22:05:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.378 22:05:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.379 22:05:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.379 22:05:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:38.379 22:05:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.379 22:05:32 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:38.379 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.379 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.379 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.379 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.379 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.379 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.379 22:05:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:38.379 22:05:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.379 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.379 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.379 22:05:32 nvmf_identify_passthru -- nvmf/common.sh@285 
-- # xtrace_disable 00:27:38.379 22:05:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:43.651 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:43.651 22:05:37 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:43.651 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:43.651 Found net devices under 0000:86:00.0: cvl_0_0 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:43.651 Found net devices under 0000:86:00.1: cvl_0_1 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.651 
22:05:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:43.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:27:43.651 00:27:43.651 --- 10.0.0.2 ping statistics --- 00:27:43.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.651 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:27:43.651 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:43.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:27:43.651 00:27:43.651 --- 10.0.0.1 ping statistics --- 00:27:43.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.652 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:27:43.652 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.652 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:43.652 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:43.652 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.652 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:43.652 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:43.652 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.652 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:43.652 22:05:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:43.652 22:05:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:43.652 22:05:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:43.652 22:05:37 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:43.652 22:05:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:43.652 22:05:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:43.652 22:05:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:43.652 22:05:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:43.652 22:05:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:43.652 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.867 
22:05:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:47.867 22:05:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:47.867 22:05:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:47.868 22:05:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:47.868 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.093 22:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:52.093 22:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:52.093 22:05:45 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:52.093 22:05:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:52.093 22:05:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:52.093 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:52.093 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:52.093 22:05:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3861548 00:27:52.093 22:05:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:52.093 22:05:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.093 22:05:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3861548 00:27:52.093 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3861548 ']' 00:27:52.093 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.093 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:52.093 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.093 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:52.093 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:52.093 [2024-07-15 22:05:46.072900] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:27:52.093 [2024-07-15 22:05:46.072949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.093 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.093 [2024-07-15 22:05:46.129444] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.093 [2024-07-15 22:05:46.210079] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.093 [2024-07-15 22:05:46.210115] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
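[Aside: the target here is started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so configuration RPCs can be issued before the framework initializes; that is what makes the pre-init nvmf_set_config call below legal. A sketch of the three-step flow, assuming the default RPC socket /var/tmp/spdk.sock and flags taken from the trace:

    # Configure-before-init flow enabled by --wait-for-rpc.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0xF --wait-for-rpc &
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # pre-init only
    ./scripts/rpc.py framework_start_init                        # start subsystems
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # normal runtime RPC
]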
00:27:52.093 [2024-07-15 22:05:46.210122] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.093 [2024-07-15 22:05:46.210128] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.093 [2024-07-15 22:05:46.210133] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.093 [2024-07-15 22:05:46.210190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.093 [2024-07-15 22:05:46.210289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.093 [2024-07-15 22:05:46.210310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.094 [2024-07-15 22:05:46.210312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.662 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.662 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:52.662 22:05:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:52.662 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.662 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:52.662 INFO: Log level set to 20 00:27:52.662 INFO: Requests: 00:27:52.662 { 00:27:52.662 "jsonrpc": "2.0", 00:27:52.662 "method": "nvmf_set_config", 00:27:52.662 "id": 1, 00:27:52.662 "params": { 00:27:52.662 "admin_cmd_passthru": { 00:27:52.662 "identify_ctrlr": true 00:27:52.662 } 00:27:52.662 } 00:27:52.662 } 00:27:52.662 00:27:52.662 INFO: response: 00:27:52.662 { 00:27:52.662 "jsonrpc": "2.0", 00:27:52.662 "id": 1, 00:27:52.662 "result": true 00:27:52.662 } 00:27:52.662 00:27:52.662 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.662 22:05:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:52.662 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.662 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:52.662 INFO: Setting log level to 20 00:27:52.662 INFO: Setting log level to 20 00:27:52.662 INFO: Log level set to 20 00:27:52.662 INFO: Log level set to 20 00:27:52.662 INFO: Requests: 00:27:52.662 { 00:27:52.662 "jsonrpc": "2.0", 00:27:52.662 "method": "framework_start_init", 00:27:52.662 "id": 1 00:27:52.662 } 00:27:52.662 00:27:52.662 INFO: Requests: 00:27:52.662 { 00:27:52.662 "jsonrpc": "2.0", 00:27:52.662 "method": "framework_start_init", 00:27:52.662 "id": 1 00:27:52.662 } 00:27:52.662 00:27:52.920 [2024-07-15 22:05:46.981703] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:52.920 INFO: response: 00:27:52.920 { 00:27:52.920 "jsonrpc": "2.0", 00:27:52.920 "id": 1, 00:27:52.920 "result": true 00:27:52.920 } 00:27:52.920 00:27:52.920 INFO: response: 00:27:52.920 { 00:27:52.920 "jsonrpc": "2.0", 00:27:52.920 "id": 1, 00:27:52.920 "result": true 00:27:52.920 } 00:27:52.920 00:27:52.920 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.920 22:05:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.920 22:05:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.920 22:05:46 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:52.920 INFO: Setting log level to 40 00:27:52.920 INFO: Setting log level to 40 00:27:52.921 INFO: Setting log level to 40 00:27:52.921 [2024-07-15 22:05:46.995185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.921 22:05:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.921 22:05:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:52.921 22:05:47 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:52.921 22:05:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:52.921 22:05:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:52.921 22:05:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.921 22:05:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.209 Nvme0n1 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.209 22:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.209 22:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.209 22:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.209 [2024-07-15 22:05:49.888192] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.209 22:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.209 [ 00:27:56.209 { 00:27:56.209 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:56.209 "subtype": "Discovery", 00:27:56.209 "listen_addresses": [], 00:27:56.209 "allow_any_host": true, 00:27:56.209 "hosts": [] 00:27:56.209 }, 00:27:56.209 { 00:27:56.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:56.209 "subtype": "NVMe", 00:27:56.209 "listen_addresses": [ 00:27:56.209 { 00:27:56.209 "trtype": "TCP", 00:27:56.209 "adrfam": "IPv4", 00:27:56.209 "traddr": "10.0.0.2", 00:27:56.209 "trsvcid": "4420" 00:27:56.209 } 00:27:56.209 ], 00:27:56.209 "allow_any_host": true, 00:27:56.209 "hosts": [], 00:27:56.209 "serial_number": 
"SPDK00000000000001", 00:27:56.209 "model_number": "SPDK bdev Controller", 00:27:56.209 "max_namespaces": 1, 00:27:56.209 "min_cntlid": 1, 00:27:56.209 "max_cntlid": 65519, 00:27:56.209 "namespaces": [ 00:27:56.209 { 00:27:56.209 "nsid": 1, 00:27:56.209 "bdev_name": "Nvme0n1", 00:27:56.209 "name": "Nvme0n1", 00:27:56.209 "nguid": "CE3E0DD5800F45A59A39B18981348E43", 00:27:56.209 "uuid": "ce3e0dd5-800f-45a5-9a39-b18981348e43" 00:27:56.209 } 00:27:56.209 ] 00:27:56.209 } 00:27:56.209 ] 00:27:56.209 22:05:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.209 22:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:56.209 22:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:56.209 22:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:56.209 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.209 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:56.209 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:56.209 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:56.209 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:56.209 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.209 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:56.209 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:56.209 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:56.209 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:56.209 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.209 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.209 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.209 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:56.210 22:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:56.210 rmmod nvme_tcp 00:27:56.210 rmmod nvme_fabrics 00:27:56.210 rmmod nvme_keyring 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:56.210 22:05:50 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3861548 ']' 00:27:56.210 22:05:50 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3861548 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3861548 ']' 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3861548 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3861548 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3861548' 00:27:56.210 killing process with pid 3861548 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3861548 00:27:56.210 22:05:50 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3861548 00:27:58.113 22:05:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:58.113 22:05:51 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:58.113 22:05:51 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.113 22:05:51 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.113 22:05:51 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.113 22:05:51 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.113 22:05:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:58.113 22:05:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.018 22:05:53 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:00.018 00:28:00.018 real 0m21.533s 00:28:00.018 user 0m29.983s 00:28:00.018 sys 0m4.581s 00:28:00.018 22:05:53 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.018 22:05:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:00.018 ************************************ 00:28:00.018 END TEST nvmf_identify_passthru 00:28:00.018 ************************************ 00:28:00.018 22:05:53 -- common/autotest_common.sh@1142 -- # return 0 00:28:00.018 22:05:53 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:00.018 22:05:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:00.018 22:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.018 22:05:53 -- common/autotest_common.sh@10 -- # set +x 00:28:00.018 ************************************ 00:28:00.018 START TEST nvmf_dif 00:28:00.018 ************************************ 00:28:00.018 22:05:53 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:00.018 * Looking for test storage... 
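[Aside: the identify_passthru test that just ended boils down to comparing identify data read directly over PCIe against the same fields read back through the TCP passthru subsystem. A condensed sketch using the spdk_nvme_identify invocations visible in the trace above:

    # Condensed form of the serial-number comparison performed above.
    bdf=0000:5e:00.0
    local_sn=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
               | awk '/Serial Number:/ {print $3}')
    remote_sn=$(./build/bin/spdk_nvme_identify \
               -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
               | awk '/Serial Number:/ {print $3}')
    [ "$local_sn" = "$remote_sn" ] || exit 1   # passthru must expose the real SN
]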
00:28:00.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:00.018 22:05:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.018 22:05:54 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.018 22:05:54 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.018 22:05:54 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.018 22:05:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.018 22:05:54 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.018 22:05:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.018 22:05:54 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:00.018 22:05:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:00.018 22:05:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:00.018 22:05:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:00.018 22:05:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:00.018 22:05:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:00.018 22:05:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.018 22:05:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:00.018 22:05:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:00.018 22:05:54 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:00.018 22:05:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:05.291 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:05.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
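The trace above enumerates candidate NICs by PCI vendor/device ID (this host matches the E810 entry, 0x8086:0x159b) and then resolves kernel netdev names from sysfs. A standalone sketch of the same discovery, assuming lspci is available and 8086:159b is the device ID of interest:

  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    for net in /sys/bus/pci/devices/$pci/net/*; do   # same sysfs path the trace walks
      [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
    done
  done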
00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:05.291 Found net devices under 0000:86:00.0: cvl_0_0 00:28:05.291 22:05:58 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:05.292 Found net devices under 0000:86:00.1: cvl_0_1 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.292 22:05:58 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.292 22:05:59 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:05.292 22:05:59 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.292 22:05:59 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.292 22:05:59 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.292 22:05:59 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:05.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:28:05.292 00:28:05.292 --- 10.0.0.2 ping statistics --- 00:28:05.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.292 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:28:05.292 22:05:59 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:05.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:05.292 00:28:05.292 --- 10.0.0.1 ping statistics --- 00:28:05.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.292 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:05.292 22:05:59 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.292 22:05:59 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:05.292 22:05:59 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:05.292 22:05:59 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:07.227 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:07.227 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:07.227 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:07.227 22:06:01 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:07.227 22:06:01 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:07.227 22:06:01 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:07.227 22:06:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3867047 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3867047 00:28:07.227 22:06:01 nvmf_dif -- 
common/autotest_common.sh@829 -- # '[' -z 3867047 ']' 00:28:07.227 22:06:01 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.227 22:06:01 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:07.227 22:06:01 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.227 22:06:01 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:07.227 22:06:01 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:07.227 22:06:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:07.227 [2024-07-15 22:06:01.443200] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:28:07.227 [2024-07-15 22:06:01.443252] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.486 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.486 [2024-07-15 22:06:01.501570] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.486 [2024-07-15 22:06:01.580888] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.486 [2024-07-15 22:06:01.580922] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.486 [2024-07-15 22:06:01.580929] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.486 [2024-07-15 22:06:01.580935] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.486 [2024-07-15 22:06:01.580940] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
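Before the target came up, the preceding records built the TCP test topology: the target-side port is moved into a private network namespace, both sides get 10.0.0.x addresses, and nvmf_tgt is launched inside that namespace. Consolidated sketch of those traced commands (interface names cvl_0_0/cvl_0_1 are specific to this host; paths shortened):

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                  # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator port stays in the root ns
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # verify both directions, as traced above
  ip netns exec $NS ping -c 1 10.0.0.1
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &   # then wait for /var/tmp/spdk.sock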
00:28:07.486 [2024-07-15 22:06:01.580958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:08.055 22:06:02 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:08.055 22:06:02 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.055 22:06:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:08.055 22:06:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:08.055 [2024-07-15 22:06:02.284422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.055 22:06:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.055 22:06:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:08.315 ************************************ 00:28:08.315 START TEST fio_dif_1_default 00:28:08.315 ************************************ 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:08.315 bdev_null0 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:08.315 [2024-07-15 22:06:02.348685] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.315 { 00:28:08.315 "params": { 00:28:08.315 "name": "Nvme$subsystem", 00:28:08.315 "trtype": "$TEST_TRANSPORT", 00:28:08.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.315 "adrfam": "ipv4", 00:28:08.315 "trsvcid": "$NVMF_PORT", 00:28:08.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.315 "hdgst": ${hdgst:-false}, 00:28:08.315 "ddgst": ${ddgst:-false} 00:28:08.315 }, 00:28:08.315 "method": "bdev_nvme_attach_controller" 00:28:08.315 } 00:28:08.315 EOF 00:28:08.315 )") 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@554 -- # cat 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:08.315 "params": { 00:28:08.315 "name": "Nvme0", 00:28:08.315 "trtype": "tcp", 00:28:08.315 "traddr": "10.0.0.2", 00:28:08.315 "adrfam": "ipv4", 00:28:08.315 "trsvcid": "4420", 00:28:08.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:08.315 "hdgst": false, 00:28:08.315 "ddgst": false 00:28:08.315 }, 00:28:08.315 "method": "bdev_nvme_attach_controller" 00:28:08.315 }' 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:08.315 22:06:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.573 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:08.573 fio-3.35 00:28:08.573 Starting 1 thread 00:28:08.573 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.863 00:28:20.863 filename0: (groupid=0, jobs=1): err= 0: pid=3867499: Mon Jul 15 22:06:13 2024 00:28:20.863 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:28:20.863 slat (nsec): min=4260, max=25501, avg=6368.14, stdev=1328.44 00:28:20.863 clat (usec): min=623, max=46429, avg=21040.58, stdev=20337.07 00:28:20.863 lat (usec): min=629, max=46442, avg=21046.95, stdev=20337.09 00:28:20.863 clat percentiles (usec): 00:28:20.863 | 1.00th=[ 635], 5.00th=[ 644], 10.00th=[ 644], 20.00th=[ 652], 00:28:20.863 | 30.00th=[ 660], 40.00th=[ 668], 50.00th=[41157], 60.00th=[41157], 00:28:20.863 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:20.863 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:28:20.863 | 99.99th=[46400] 00:28:20.863 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=17.13, samples=19 00:28:20.863 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19 
00:28:20.863 lat (usec) : 750=46.79%, 1000=3.11% 00:28:20.863 lat (msec) : 50=50.11% 00:28:20.863 cpu : usr=94.73%, sys=5.00%, ctx=13, majf=0, minf=210 00:28:20.863 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:20.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.863 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:20.863 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:20.863 00:28:20.863 Run status group 0 (all jobs): 00:28:20.863 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10003-10003msec 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.863 00:28:20.863 real 0m11.044s 00:28:20.863 user 0m15.943s 00:28:20.863 sys 0m0.772s 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.863 ************************************ 00:28:20.863 END TEST fio_dif_1_default 00:28:20.863 ************************************ 00:28:20.863 22:06:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:20.863 22:06:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:20.863 22:06:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:20.863 22:06:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.863 22:06:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.863 ************************************ 00:28:20.863 START TEST fio_dif_1_multi_subsystems 00:28:20.863 ************************************ 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub 
in "$@" 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:20.863 bdev_null0 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:20.863 [2024-07-15 22:06:13.458353] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.863 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:20.864 bdev_null1 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:20.864 22:06:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.864 { 00:28:20.864 "params": { 00:28:20.864 "name": "Nvme$subsystem", 00:28:20.864 "trtype": "$TEST_TRANSPORT", 00:28:20.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.864 
"adrfam": "ipv4", 00:28:20.864 "trsvcid": "$NVMF_PORT", 00:28:20.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.864 "hdgst": ${hdgst:-false}, 00:28:20.864 "ddgst": ${ddgst:-false} 00:28:20.864 }, 00:28:20.864 "method": "bdev_nvme_attach_controller" 00:28:20.864 } 00:28:20.864 EOF 00:28:20.864 )") 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.864 { 00:28:20.864 "params": { 00:28:20.864 "name": "Nvme$subsystem", 00:28:20.864 "trtype": "$TEST_TRANSPORT", 00:28:20.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.864 "adrfam": "ipv4", 00:28:20.864 "trsvcid": "$NVMF_PORT", 00:28:20.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.864 "hdgst": ${hdgst:-false}, 00:28:20.864 "ddgst": ${ddgst:-false} 00:28:20.864 }, 00:28:20.864 "method": "bdev_nvme_attach_controller" 00:28:20.864 } 00:28:20.864 EOF 00:28:20.864 )") 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:20.864 "params": { 00:28:20.864 "name": "Nvme0", 00:28:20.864 "trtype": "tcp", 00:28:20.864 "traddr": "10.0.0.2", 00:28:20.864 "adrfam": "ipv4", 00:28:20.864 "trsvcid": "4420", 00:28:20.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:20.864 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:20.864 "hdgst": false, 00:28:20.864 "ddgst": false 00:28:20.864 }, 00:28:20.864 "method": "bdev_nvme_attach_controller" 00:28:20.864 },{ 00:28:20.864 "params": { 00:28:20.864 "name": "Nvme1", 00:28:20.864 "trtype": "tcp", 00:28:20.864 "traddr": "10.0.0.2", 00:28:20.864 "adrfam": "ipv4", 00:28:20.864 "trsvcid": "4420", 00:28:20.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.864 "hdgst": false, 00:28:20.864 "ddgst": false 00:28:20.864 }, 00:28:20.864 "method": "bdev_nvme_attach_controller" 00:28:20.864 }' 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:20.864 22:06:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.864 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:20.864 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:20.864 fio-3.35 00:28:20.864 Starting 2 threads 00:28:20.864 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.838 00:28:30.838 filename0: (groupid=0, jobs=1): err= 0: pid=3869863: Mon Jul 15 22:06:24 2024 00:28:30.838 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10027msec) 00:28:30.838 slat (nsec): min=6100, max=31748, avg=7875.47, stdev=2584.21 00:28:30.838 clat (usec): min=40847, max=42753, avg=41752.93, stdev=422.03 00:28:30.838 lat (usec): min=40854, max=42766, avg=41760.80, stdev=422.11 00:28:30.838 clat percentiles (usec): 00:28:30.838 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:30.838 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:30.838 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:30.838 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:28:30.838 | 99.99th=[42730] 
00:28:30.838 bw ( KiB/s): min= 352, max= 416, per=49.87%, avg=382.40, stdev=12.61, samples=20 00:28:30.838 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:28:30.838 lat (msec) : 50=100.00% 00:28:30.838 cpu : usr=97.75%, sys=1.96%, ctx=12, majf=0, minf=132 00:28:30.838 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.838 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.838 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:30.838 filename1: (groupid=0, jobs=1): err= 0: pid=3869864: Mon Jul 15 22:06:24 2024 00:28:30.838 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10025msec) 00:28:30.838 slat (nsec): min=6218, max=63644, avg=8042.18, stdev=3303.41 00:28:30.838 clat (usec): min=40865, max=42257, avg=41743.55, stdev=418.36 00:28:30.838 lat (usec): min=40872, max=42289, avg=41751.59, stdev=418.28 00:28:30.838 clat percentiles (usec): 00:28:30.838 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:30.838 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:30.838 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:30.838 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:28:30.838 | 99.99th=[42206] 00:28:30.838 bw ( KiB/s): min= 352, max= 416, per=49.87%, avg=382.40, stdev=16.33, samples=20 00:28:30.838 iops : min= 88, max= 104, avg=95.60, stdev= 4.08, samples=20 00:28:30.838 lat (msec) : 50=100.00% 00:28:30.838 cpu : usr=97.66%, sys=2.06%, ctx=17, majf=0, minf=154 00:28:30.838 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.838 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.838 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:30.838 00:28:30.838 Run status group 0 (all jobs): 00:28:30.838 READ: bw=766KiB/s (784kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=7680KiB (7864kB), run=10025-10027msec 00:28:30.838 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:30.838 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:30.838 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:30.838 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:30.838 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:30.838 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.839 22:06:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.839 00:28:30.839 real 0m11.527s 00:28:30.839 user 0m27.014s 00:28:30.839 sys 0m0.703s 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.839 22:06:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 ************************************ 00:28:30.839 END TEST fio_dif_1_multi_subsystems 00:28:30.839 ************************************ 00:28:30.839 22:06:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:30.839 22:06:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:30.839 22:06:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:30.839 22:06:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.839 22:06:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 ************************************ 00:28:30.839 START TEST fio_dif_rand_params 00:28:30.839 ************************************ 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
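Relative to the earlier passes, this first fio_dif_rand_params pass changes only the protection scheme and the I/O shape: the null bdev is recreated with DIF type 3 (still 64 MiB, 512-byte blocks, 16-byte metadata) and driven at bs=128k with 3 jobs at queue depth 3 for 5 seconds. The bdev creation as a standalone call (rpc.py path assumed):

  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3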
00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 bdev_null0 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:30.839 [2024-07-15 22:06:25.058308] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:30.839 
22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:30.839 { 00:28:30.839 "params": { 00:28:30.839 "name": "Nvme$subsystem", 00:28:30.839 "trtype": "$TEST_TRANSPORT", 00:28:30.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.839 "adrfam": "ipv4", 00:28:30.839 "trsvcid": "$NVMF_PORT", 00:28:30.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.839 "hdgst": ${hdgst:-false}, 00:28:30.839 "ddgst": ${ddgst:-false} 00:28:30.839 }, 00:28:30.839 "method": "bdev_nvme_attach_controller" 00:28:30.839 } 00:28:30.839 EOF 00:28:30.839 )") 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
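The fio_bdev invocation traced above hands fio two in-memory files over /dev/fd: the positional job file and the bdev JSON behind --spdk_json_conf, with the SPDK fio plugin injected via LD_PRELOAD. A sketch achieving the same with process substitution ($bdev_json and $fio_job are hypothetical variables holding the generated JSON and job file):

  LD_PRELOAD=./build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf <(printf '%s\n' "$bdev_json") \
      <(printf '%s\n' "$fio_job")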
00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:30.839 22:06:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:30.839 "params": { 00:28:30.839 "name": "Nvme0", 00:28:30.839 "trtype": "tcp", 00:28:30.839 "traddr": "10.0.0.2", 00:28:30.839 "adrfam": "ipv4", 00:28:30.839 "trsvcid": "4420", 00:28:30.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:30.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:30.839 "hdgst": false, 00:28:30.839 "ddgst": false 00:28:30.839 }, 00:28:30.839 "method": "bdev_nvme_attach_controller" 00:28:30.839 }' 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:31.122 22:06:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.384 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:31.384 ... 
00:28:31.385 fio-3.35 00:28:31.385 Starting 3 threads 00:28:31.385 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.937 00:28:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=3871823: Mon Jul 15 22:06:30 2024 00:28:37.937 read: IOPS=256, BW=32.1MiB/s (33.7MB/s)(161MiB/5004msec) 00:28:37.937 slat (nsec): min=4421, max=50574, avg=9417.50, stdev=2955.69 00:28:37.937 clat (usec): min=3895, max=51639, avg=11659.05, stdev=13296.40 00:28:37.937 lat (usec): min=3901, max=51651, avg=11668.47, stdev=13296.59 00:28:37.937 clat percentiles (usec): 00:28:37.937 | 1.00th=[ 4178], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 5145], 00:28:37.937 | 30.00th=[ 6063], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7570], 00:28:37.937 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[46924], 95.00th=[48497], 00:28:37.937 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:28:37.937 | 99.99th=[51643] 00:28:37.937 bw ( KiB/s): min=23808, max=43863, per=34.01%, avg=32866.00, stdev=7002.52, samples=10 00:28:37.937 iops : min= 186, max= 342, avg=256.60, stdev=54.59, samples=10 00:28:37.937 lat (msec) : 4=0.16%, 10=83.75%, 20=4.67%, 50=10.42%, 100=1.01% 00:28:37.937 cpu : usr=94.70%, sys=4.92%, ctx=12, majf=0, minf=127 00:28:37.937 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.937 issued rwts: total=1286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.937 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=3871824: Mon Jul 15 22:06:30 2024 00:28:37.937 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(153MiB/5046msec) 00:28:37.937 slat (nsec): min=3036, max=21989, avg=9214.61, stdev=2668.68 00:28:37.937 clat (usec): min=3607, max=89588, avg=12357.30, stdev=14575.20 00:28:37.937 lat (usec): min=3614, max=89600, avg=12366.51, stdev=14575.45 00:28:37.937 clat percentiles (usec): 00:28:37.937 | 1.00th=[ 4178], 5.00th=[ 4555], 10.00th=[ 4752], 20.00th=[ 5276], 00:28:37.937 | 30.00th=[ 5932], 40.00th=[ 6521], 50.00th=[ 6915], 60.00th=[ 7373], 00:28:37.937 | 70.00th=[ 8455], 80.00th=[ 9634], 90.00th=[47449], 95.00th=[49021], 00:28:37.937 | 99.00th=[50594], 99.50th=[51119], 99.90th=[87557], 99.95th=[89654], 00:28:37.937 | 99.99th=[89654] 00:28:37.937 bw ( KiB/s): min=15104, max=45056, per=32.25%, avg=31172.00, stdev=10575.34, samples=10 00:28:37.937 iops : min= 118, max= 352, avg=243.40, stdev=82.47, samples=10 00:28:37.937 lat (msec) : 4=0.41%, 10=81.89%, 20=4.75%, 50=10.08%, 100=2.87% 00:28:37.937 cpu : usr=95.36%, sys=4.28%, ctx=10, majf=0, minf=62 00:28:37.937 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.937 issued rwts: total=1220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.937 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=3871825: Mon Jul 15 22:06:30 2024 00:28:37.937 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(163MiB/5044msec) 00:28:37.937 slat (nsec): min=4847, max=27937, avg=9266.95, stdev=2676.48 00:28:37.937 clat (usec): min=3995, max=91038, avg=11591.00, stdev=13949.77 00:28:37.937 lat (usec): min=4002, max=91048, avg=11600.27, stdev=13950.09 00:28:37.937 clat 
percentiles (usec): 00:28:37.937 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 4948], 00:28:37.937 | 30.00th=[ 5473], 40.00th=[ 6390], 50.00th=[ 6915], 60.00th=[ 7308], 00:28:37.937 | 70.00th=[ 8356], 80.00th=[ 9634], 90.00th=[46400], 95.00th=[48497], 00:28:37.937 | 99.00th=[51119], 99.50th=[52167], 99.90th=[90702], 99.95th=[90702], 00:28:37.937 | 99.99th=[90702] 00:28:37.937 bw ( KiB/s): min=18944, max=52224, per=34.46%, avg=33305.60, stdev=9455.80, samples=10 00:28:37.937 iops : min= 148, max= 408, avg=260.20, stdev=73.87, samples=10 00:28:37.937 lat (msec) : 4=0.08%, 10=81.98%, 20=6.83%, 50=8.59%, 100=2.53% 00:28:37.937 cpu : usr=94.73%, sys=4.94%, ctx=12, majf=0, minf=82 00:28:37.937 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.937 issued rwts: total=1304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.937 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:37.937 00:28:37.937 Run status group 0 (all jobs): 00:28:37.937 READ: bw=94.4MiB/s (99.0MB/s), 30.2MiB/s-32.3MiB/s (31.7MB/s-33.9MB/s), io=476MiB (499MB), run=5004-5046msec 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- 
# local sub_id=0 00:28:37.937 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 bdev_null0 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 [2024-07-15 22:06:31.243681] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 bdev_null1 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 bdev_null2 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:37.938 { 00:28:37.938 "params": { 00:28:37.938 "name": "Nvme$subsystem", 00:28:37.938 "trtype": "$TEST_TRANSPORT", 00:28:37.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.938 "adrfam": "ipv4", 00:28:37.938 "trsvcid": "$NVMF_PORT", 00:28:37.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.938 "hdgst": ${hdgst:-false}, 00:28:37.938 "ddgst": ${ddgst:-false} 00:28:37.938 }, 00:28:37.938 "method": "bdev_nvme_attach_controller" 00:28:37.938 } 00:28:37.938 EOF 00:28:37.938 )") 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:37.938 { 00:28:37.938 "params": { 00:28:37.938 "name": "Nvme$subsystem", 00:28:37.938 "trtype": "$TEST_TRANSPORT", 00:28:37.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.938 "adrfam": "ipv4", 00:28:37.938 "trsvcid": "$NVMF_PORT", 00:28:37.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.938 "hdgst": ${hdgst:-false}, 00:28:37.938 "ddgst": ${ddgst:-false} 00:28:37.938 }, 00:28:37.938 "method": "bdev_nvme_attach_controller" 00:28:37.938 } 00:28:37.938 EOF 00:28:37.938 )") 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:37.938 22:06:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:37.938 { 00:28:37.938 "params": { 00:28:37.938 "name": "Nvme$subsystem", 00:28:37.938 "trtype": "$TEST_TRANSPORT", 00:28:37.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.938 "adrfam": "ipv4", 00:28:37.938 "trsvcid": "$NVMF_PORT", 00:28:37.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.938 "hdgst": ${hdgst:-false}, 00:28:37.938 "ddgst": ${ddgst:-false} 00:28:37.938 }, 00:28:37.938 "method": "bdev_nvme_attach_controller" 00:28:37.938 } 00:28:37.938 EOF 00:28:37.938 )") 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:37.938 22:06:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:37.938 "params": { 00:28:37.938 "name": "Nvme0", 00:28:37.938 "trtype": "tcp", 00:28:37.938 "traddr": "10.0.0.2", 00:28:37.938 "adrfam": "ipv4", 00:28:37.938 "trsvcid": "4420", 00:28:37.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:37.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:37.938 "hdgst": false, 00:28:37.938 "ddgst": false 00:28:37.938 }, 00:28:37.938 "method": "bdev_nvme_attach_controller" 00:28:37.938 },{ 00:28:37.938 "params": { 00:28:37.938 "name": "Nvme1", 00:28:37.938 "trtype": "tcp", 00:28:37.938 "traddr": "10.0.0.2", 00:28:37.938 "adrfam": "ipv4", 00:28:37.939 "trsvcid": "4420", 00:28:37.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.939 "hdgst": false, 00:28:37.939 "ddgst": false 00:28:37.939 }, 00:28:37.939 "method": "bdev_nvme_attach_controller" 00:28:37.939 },{ 00:28:37.939 "params": { 00:28:37.939 "name": "Nvme2", 00:28:37.939 "trtype": "tcp", 00:28:37.939 "traddr": "10.0.0.2", 00:28:37.939 "adrfam": "ipv4", 00:28:37.939 "trsvcid": "4420", 00:28:37.939 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:37.939 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:37.939 "hdgst": false, 00:28:37.939 "ddgst": false 00:28:37.939 }, 00:28:37.939 "method": "bdev_nvme_attach_controller" 00:28:37.939 }' 00:28:37.939 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:37.939 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:37.939 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.939 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.939 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:37.939 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:37.939 
22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:37.939 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:37.939 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:37.939 22:06:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.939 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:37.939 ... 00:28:37.939 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:37.939 ... 00:28:37.939 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:37.939 ... 00:28:37.939 fio-3.35 00:28:37.939 Starting 24 threads 00:28:37.939 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.121 00:28:50.121 filename0: (groupid=0, jobs=1): err= 0: pid=3872887: Mon Jul 15 22:06:42 2024 00:28:50.121 read: IOPS=579, BW=2318KiB/s (2374kB/s)(22.7MiB/10022msec) 00:28:50.121 slat (nsec): min=6887, max=82733, avg=32379.43, stdev=17734.25 00:28:50.121 clat (usec): min=4439, max=36644, avg=27331.55, stdev=2406.85 00:28:50.121 lat (usec): min=4450, max=36679, avg=27363.93, stdev=2408.40 00:28:50.121 clat percentiles (usec): 00:28:50.121 | 1.00th=[ 8160], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:28:50.121 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.121 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.121 | 99.00th=[28705], 99.50th=[28967], 99.90th=[32375], 99.95th=[32375], 00:28:50.121 | 99.99th=[36439] 00:28:50.121 bw ( KiB/s): min= 2176, max= 2688, per=4.20%, avg=2316.80, stdev=100.87, samples=20 00:28:50.121 iops : min= 544, max= 672, avg=579.20, stdev=25.22, samples=20 00:28:50.121 lat (msec) : 10=1.10%, 20=0.59%, 50=98.31% 00:28:50.121 cpu : usr=98.86%, sys=0.76%, ctx=14, majf=0, minf=51 00:28:50.121 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:50.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.121 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.121 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.121 filename0: (groupid=0, jobs=1): err= 0: pid=3872888: Mon Jul 15 22:06:42 2024 00:28:50.121 read: IOPS=574, BW=2297KiB/s (2352kB/s)(22.4MiB/10002msec) 00:28:50.121 slat (nsec): min=7730, max=81375, avg=32038.98, stdev=15422.07 00:28:50.121 clat (usec): min=10778, max=49744, avg=27561.88, stdev=1635.07 00:28:50.121 lat (usec): min=10811, max=49784, avg=27593.92, stdev=1635.37 00:28:50.121 clat percentiles (usec): 00:28:50.121 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:28:50.121 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.121 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.121 | 99.00th=[28443], 99.50th=[28705], 99.90th=[49546], 99.95th=[49546], 00:28:50.121 | 99.99th=[49546] 00:28:50.121 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2283.79, stdev=47.95, samples=19 00:28:50.121 iops : min= 544, max= 576, avg=570.95, stdev=11.99, samples=19 
00:28:50.121 lat (msec) : 20=0.56%, 50=99.44% 00:28:50.121 cpu : usr=98.65%, sys=0.83%, ctx=61, majf=0, minf=51 00:28:50.121 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:50.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.121 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.121 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.121 filename0: (groupid=0, jobs=1): err= 0: pid=3872889: Mon Jul 15 22:06:42 2024 00:28:50.121 read: IOPS=574, BW=2299KiB/s (2354kB/s)(22.5MiB/10019msec) 00:28:50.121 slat (nsec): min=6957, max=84467, avg=34134.28, stdev=18856.15 00:28:50.121 clat (usec): min=15244, max=38276, avg=27542.22, stdev=1366.37 00:28:50.121 lat (usec): min=15258, max=38325, avg=27576.35, stdev=1367.62 00:28:50.121 clat percentiles (usec): 00:28:50.121 | 1.00th=[20579], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:50.121 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.121 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.121 | 99.00th=[33817], 99.50th=[34341], 99.90th=[36439], 99.95th=[36963], 00:28:50.121 | 99.99th=[38536] 00:28:50.121 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2296.80, stdev=28.66, samples=20 00:28:50.121 iops : min= 544, max= 576, avg=574.20, stdev= 7.16, samples=20 00:28:50.121 lat (msec) : 20=0.66%, 50=99.34% 00:28:50.121 cpu : usr=98.97%, sys=0.65%, ctx=14, majf=0, minf=96 00:28:50.121 IO depths : 1=5.1%, 2=11.1%, 4=24.8%, 8=51.6%, 16=7.4%, 32=0.0%, >=64=0.0% 00:28:50.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.121 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.121 issued rwts: total=5758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.121 filename0: (groupid=0, jobs=1): err= 0: pid=3872890: Mon Jul 15 22:06:42 2024 00:28:50.121 read: IOPS=580, BW=2323KiB/s (2379kB/s)(22.7MiB/10004msec) 00:28:50.122 slat (nsec): min=6868, max=87096, avg=22522.10, stdev=16290.55 00:28:50.122 clat (usec): min=10276, max=51309, avg=27369.69, stdev=2650.22 00:28:50.122 lat (usec): min=10293, max=51327, avg=27392.21, stdev=2650.37 00:28:50.122 clat percentiles (usec): 00:28:50.122 | 1.00th=[16712], 5.00th=[23725], 10.00th=[27132], 20.00th=[27395], 00:28:50.122 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.122 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.122 | 99.00th=[32900], 99.50th=[39060], 99.90th=[51119], 99.95th=[51119], 00:28:50.122 | 99.99th=[51119] 00:28:50.122 bw ( KiB/s): min= 2176, max= 2528, per=4.21%, avg=2318.20, stdev=77.47, samples=20 00:28:50.122 iops : min= 544, max= 632, avg=579.55, stdev=19.37, samples=20 00:28:50.122 lat (msec) : 20=2.96%, 50=96.76%, 100=0.28% 00:28:50.122 cpu : usr=98.84%, sys=0.77%, ctx=20, majf=0, minf=59 00:28:50.122 IO depths : 1=3.1%, 2=6.3%, 4=13.3%, 8=65.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:50.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 complete : 0=0.0%, 4=91.7%, 8=5.1%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 issued rwts: total=5810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.122 filename0: (groupid=0, jobs=1): err= 0: pid=3872891: Mon Jul 
15 22:06:42 2024 00:28:50.122 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10004msec) 00:28:50.122 slat (nsec): min=6938, max=61433, avg=19724.65, stdev=7650.60 00:28:50.122 clat (usec): min=12059, max=43708, avg=27752.57, stdev=1468.93 00:28:50.122 lat (usec): min=12074, max=43722, avg=27772.29, stdev=1469.69 00:28:50.122 clat percentiles (usec): 00:28:50.122 | 1.00th=[23725], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:50.122 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.122 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:50.122 | 99.00th=[34866], 99.50th=[36963], 99.90th=[43779], 99.95th=[43779], 00:28:50.122 | 99.99th=[43779] 00:28:50.122 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2287.20, stdev=40.03, samples=20 00:28:50.122 iops : min= 544, max= 576, avg=571.80, stdev=10.01, samples=20 00:28:50.122 lat (msec) : 20=0.59%, 50=99.41% 00:28:50.122 cpu : usr=98.80%, sys=0.81%, ctx=14, majf=0, minf=55 00:28:50.122 IO depths : 1=5.8%, 2=11.8%, 4=24.5%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:50.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.122 filename0: (groupid=0, jobs=1): err= 0: pid=3872892: Mon Jul 15 22:06:42 2024 00:28:50.122 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10008msec) 00:28:50.122 slat (nsec): min=7066, max=78115, avg=22643.09, stdev=12949.45 00:28:50.122 clat (usec): min=17905, max=36861, avg=27676.14, stdev=806.66 00:28:50.122 lat (usec): min=17926, max=36891, avg=27698.78, stdev=806.13 00:28:50.122 clat percentiles (usec): 00:28:50.122 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:50.122 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.122 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:50.122 | 99.00th=[28705], 99.50th=[28967], 99.90th=[36963], 99.95th=[36963], 00:28:50.122 | 99.99th=[36963] 00:28:50.122 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2291.20, stdev=39.40, samples=20 00:28:50.122 iops : min= 544, max= 576, avg=572.80, stdev= 9.85, samples=20 00:28:50.122 lat (msec) : 20=0.28%, 50=99.72% 00:28:50.122 cpu : usr=98.81%, sys=0.80%, ctx=12, majf=0, minf=64 00:28:50.122 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:50.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.122 filename0: (groupid=0, jobs=1): err= 0: pid=3872893: Mon Jul 15 22:06:42 2024 00:28:50.122 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10010msec) 00:28:50.122 slat (nsec): min=5897, max=89196, avg=28776.92, stdev=17473.23 00:28:50.122 clat (usec): min=10585, max=62224, avg=27798.19, stdev=3335.51 00:28:50.122 lat (usec): min=10615, max=62240, avg=27826.97, stdev=3335.08 00:28:50.122 clat percentiles (usec): 00:28:50.122 | 1.00th=[15926], 5.00th=[25297], 10.00th=[27132], 20.00th=[27395], 00:28:50.122 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.122 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[29754], 
00:28:50.122 | 99.00th=[39584], 99.50th=[44827], 99.90th=[62129], 99.95th=[62129], 00:28:50.122 | 99.99th=[62129] 00:28:50.122 bw ( KiB/s): min= 2032, max= 2412, per=4.13%, avg=2278.20, stdev=88.60, samples=20 00:28:50.122 iops : min= 508, max= 603, avg=569.55, stdev=22.15, samples=20 00:28:50.122 lat (msec) : 20=2.08%, 50=97.64%, 100=0.28% 00:28:50.122 cpu : usr=98.65%, sys=0.96%, ctx=15, majf=0, minf=71 00:28:50.122 IO depths : 1=4.4%, 2=9.2%, 4=20.5%, 8=57.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:28:50.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 issued rwts: total=5714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.122 filename0: (groupid=0, jobs=1): err= 0: pid=3872894: Mon Jul 15 22:06:42 2024 00:28:50.122 read: IOPS=574, BW=2297KiB/s (2352kB/s)(22.4MiB/10004msec) 00:28:50.122 slat (nsec): min=6986, max=85508, avg=23714.14, stdev=13034.81 00:28:50.122 clat (usec): min=18618, max=35078, avg=27679.55, stdev=795.89 00:28:50.122 lat (usec): min=18634, max=35113, avg=27703.27, stdev=794.64 00:28:50.122 clat percentiles (usec): 00:28:50.122 | 1.00th=[26084], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:50.122 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.122 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.122 | 99.00th=[28443], 99.50th=[32900], 99.90th=[34866], 99.95th=[34866], 00:28:50.122 | 99.99th=[34866] 00:28:50.122 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2291.20, stdev=39.40, samples=20 00:28:50.122 iops : min= 544, max= 576, avg=572.80, stdev= 9.85, samples=20 00:28:50.122 lat (msec) : 20=0.31%, 50=99.69% 00:28:50.122 cpu : usr=98.58%, sys=1.03%, ctx=17, majf=0, minf=61 00:28:50.122 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:50.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.122 filename1: (groupid=0, jobs=1): err= 0: pid=3872895: Mon Jul 15 22:06:42 2024 00:28:50.122 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10008msec) 00:28:50.122 slat (nsec): min=6883, max=77969, avg=24011.92, stdev=12917.54 00:28:50.122 clat (usec): min=11909, max=39262, avg=27657.93, stdev=1016.24 00:28:50.122 lat (usec): min=11924, max=39277, avg=27681.94, stdev=1016.19 00:28:50.122 clat percentiles (usec): 00:28:50.122 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:50.122 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.122 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.122 | 99.00th=[28967], 99.50th=[29230], 99.90th=[36963], 99.95th=[39060], 00:28:50.122 | 99.99th=[39060] 00:28:50.122 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2291.20, stdev=39.40, samples=20 00:28:50.122 iops : min= 544, max= 576, avg=572.80, stdev= 9.85, samples=20 00:28:50.122 lat (msec) : 20=0.35%, 50=99.65% 00:28:50.122 cpu : usr=98.95%, sys=0.66%, ctx=17, majf=0, minf=58 00:28:50.122 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:50.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 complete 
: 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.122 filename1: (groupid=0, jobs=1): err= 0: pid=3872896: Mon Jul 15 22:06:42 2024 00:28:50.122 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10010msec) 00:28:50.122 slat (nsec): min=5461, max=77989, avg=23735.38, stdev=12575.12 00:28:50.122 clat (usec): min=10652, max=80960, avg=27732.51, stdev=3438.94 00:28:50.122 lat (usec): min=10678, max=80976, avg=27756.25, stdev=3438.89 00:28:50.122 clat percentiles (usec): 00:28:50.122 | 1.00th=[15008], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:50.122 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.122 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:50.122 | 99.00th=[40633], 99.50th=[43779], 99.90th=[62653], 99.95th=[81265], 00:28:50.122 | 99.99th=[81265] 00:28:50.122 bw ( KiB/s): min= 2048, max= 2412, per=4.15%, avg=2286.20, stdev=68.85, samples=20 00:28:50.122 iops : min= 512, max= 603, avg=571.55, stdev=17.21, samples=20 00:28:50.122 lat (msec) : 20=2.46%, 50=97.26%, 100=0.28% 00:28:50.122 cpu : usr=98.72%, sys=0.89%, ctx=15, majf=0, minf=70 00:28:50.122 IO depths : 1=4.5%, 2=9.8%, 4=23.0%, 8=54.4%, 16=8.3%, 32=0.0%, >=64=0.0% 00:28:50.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.122 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.122 filename1: (groupid=0, jobs=1): err= 0: pid=3872897: Mon Jul 15 22:06:42 2024 00:28:50.122 read: IOPS=577, BW=2311KiB/s (2366kB/s)(22.6MiB/10008msec) 00:28:50.122 slat (nsec): min=6839, max=79820, avg=17453.70, stdev=9012.99 00:28:50.122 clat (usec): min=13649, max=42415, avg=27559.82, stdev=2395.08 00:28:50.122 lat (usec): min=13659, max=42446, avg=27577.27, stdev=2396.30 00:28:50.122 clat percentiles (usec): 00:28:50.122 | 1.00th=[18220], 5.00th=[23200], 10.00th=[27395], 20.00th=[27395], 00:28:50.122 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:50.122 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:50.122 | 99.00th=[35914], 99.50th=[36963], 99.90th=[42206], 99.95th=[42206], 00:28:50.122 | 99.99th=[42206] 00:28:50.122 bw ( KiB/s): min= 2176, max= 2544, per=4.18%, avg=2306.40, stdev=70.47, samples=20 00:28:50.123 iops : min= 544, max= 636, avg=576.60, stdev=17.62, samples=20 00:28:50.123 lat (msec) : 20=2.56%, 50=97.44% 00:28:50.123 cpu : usr=98.72%, sys=0.88%, ctx=15, majf=0, minf=79 00:28:50.123 IO depths : 1=2.7%, 2=7.3%, 4=20.5%, 8=59.3%, 16=10.1%, 32=0.0%, >=64=0.0% 00:28:50.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 complete : 0=0.0%, 4=93.2%, 8=1.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 issued rwts: total=5782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.123 filename1: (groupid=0, jobs=1): err= 0: pid=3872898: Mon Jul 15 22:06:42 2024 00:28:50.123 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10003msec) 00:28:50.123 slat (nsec): min=6921, max=79059, avg=27743.11, stdev=16087.37 00:28:50.123 clat (usec): min=10743, max=54892, avg=27645.43, stdev=2231.36 00:28:50.123 lat (usec): min=10766, max=54935, avg=27673.17, 
stdev=2230.99 00:28:50.123 clat percentiles (usec): 00:28:50.123 | 1.00th=[21365], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:50.123 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.123 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28443], 00:28:50.123 | 99.00th=[33162], 99.50th=[40109], 99.90th=[54789], 99.95th=[54789], 00:28:50.123 | 99.99th=[54789] 00:28:50.123 bw ( KiB/s): min= 2052, max= 2388, per=4.15%, avg=2288.60, stdev=65.88, samples=20 00:28:50.123 iops : min= 513, max= 597, avg=572.15, stdev=16.47, samples=20 00:28:50.123 lat (msec) : 20=0.84%, 50=98.88%, 100=0.28% 00:28:50.123 cpu : usr=98.82%, sys=0.78%, ctx=11, majf=0, minf=65 00:28:50.123 IO depths : 1=5.8%, 2=11.6%, 4=23.4%, 8=52.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:50.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 issued rwts: total=5736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.123 filename1: (groupid=0, jobs=1): err= 0: pid=3872899: Mon Jul 15 22:06:42 2024 00:28:50.123 read: IOPS=578, BW=2316KiB/s (2371kB/s)(22.6MiB/10005msec) 00:28:50.123 slat (nsec): min=3307, max=82525, avg=22950.21, stdev=17714.32 00:28:50.123 clat (usec): min=4286, max=35912, avg=27452.11, stdev=2392.37 00:28:50.123 lat (usec): min=4302, max=35964, avg=27475.06, stdev=2392.66 00:28:50.123 clat percentiles (usec): 00:28:50.123 | 1.00th=[13042], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:28:50.123 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:50.123 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:50.123 | 99.00th=[28705], 99.50th=[31851], 99.90th=[35390], 99.95th=[35914], 00:28:50.123 | 99.99th=[35914] 00:28:50.123 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2310.40, stdev=97.17, samples=20 00:28:50.123 iops : min= 544, max= 672, avg=577.60, stdev=24.29, samples=20 00:28:50.123 lat (msec) : 10=0.83%, 20=0.66%, 50=98.52% 00:28:50.123 cpu : usr=98.75%, sys=0.86%, ctx=16, majf=0, minf=81 00:28:50.123 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:50.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.123 filename1: (groupid=0, jobs=1): err= 0: pid=3872900: Mon Jul 15 22:06:42 2024 00:28:50.123 read: IOPS=578, BW=2316KiB/s (2371kB/s)(22.6MiB/10004msec) 00:28:50.123 slat (nsec): min=4283, max=70795, avg=29079.97, stdev=15874.04 00:28:50.123 clat (usec): min=4374, max=32473, avg=27377.80, stdev=2270.11 00:28:50.123 lat (usec): min=4385, max=32505, avg=27406.88, stdev=2271.58 00:28:50.123 clat percentiles (usec): 00:28:50.123 | 1.00th=[12649], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:28:50.123 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.123 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.123 | 99.00th=[28443], 99.50th=[28443], 99.90th=[32375], 99.95th=[32375], 00:28:50.123 | 99.99th=[32375] 00:28:50.123 bw ( KiB/s): min= 2176, max= 2693, per=4.19%, avg=2310.65, stdev=98.20, samples=20 00:28:50.123 iops : min= 544, max= 673, avg=577.65, stdev=24.50, 
samples=20 00:28:50.123 lat (msec) : 10=0.83%, 20=0.83%, 50=98.34% 00:28:50.123 cpu : usr=98.98%, sys=0.65%, ctx=16, majf=0, minf=101 00:28:50.123 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:50.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.123 filename1: (groupid=0, jobs=1): err= 0: pid=3872901: Mon Jul 15 22:06:42 2024 00:28:50.123 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10008msec) 00:28:50.123 slat (nsec): min=7168, max=59613, avg=18813.50, stdev=9132.96 00:28:50.123 clat (usec): min=17956, max=36804, avg=27715.56, stdev=795.80 00:28:50.123 lat (usec): min=17972, max=36836, avg=27734.38, stdev=795.48 00:28:50.123 clat percentiles (usec): 00:28:50.123 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:28:50.123 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.123 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:50.123 | 99.00th=[28705], 99.50th=[29230], 99.90th=[36439], 99.95th=[36963], 00:28:50.123 | 99.99th=[36963] 00:28:50.123 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2291.20, stdev=39.40, samples=20 00:28:50.123 iops : min= 544, max= 576, avg=572.80, stdev= 9.85, samples=20 00:28:50.123 lat (msec) : 20=0.28%, 50=99.72% 00:28:50.123 cpu : usr=98.82%, sys=0.77%, ctx=36, majf=0, minf=81 00:28:50.123 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:50.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.123 filename1: (groupid=0, jobs=1): err= 0: pid=3872902: Mon Jul 15 22:06:42 2024 00:28:50.123 read: IOPS=574, BW=2297KiB/s (2352kB/s)(22.4MiB/10004msec) 00:28:50.123 slat (nsec): min=7053, max=86739, avg=31392.98, stdev=16545.38 00:28:50.123 clat (usec): min=18652, max=33207, avg=27618.99, stdev=646.12 00:28:50.123 lat (usec): min=18667, max=33228, avg=27650.38, stdev=643.94 00:28:50.123 clat percentiles (usec): 00:28:50.123 | 1.00th=[26346], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:50.123 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.123 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.123 | 99.00th=[28443], 99.50th=[28705], 99.90th=[33162], 99.95th=[33162], 00:28:50.123 | 99.99th=[33162] 00:28:50.123 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2291.20, stdev=39.40, samples=20 00:28:50.123 iops : min= 544, max= 576, avg=572.80, stdev= 9.85, samples=20 00:28:50.123 lat (msec) : 20=0.28%, 50=99.72% 00:28:50.123 cpu : usr=98.81%, sys=0.82%, ctx=20, majf=0, minf=51 00:28:50.123 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:50.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.123 filename2: (groupid=0, jobs=1): err= 0: pid=3872903: 
Mon Jul 15 22:06:42 2024 00:28:50.123 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10005msec) 00:28:50.123 slat (nsec): min=6628, max=84628, avg=24152.98, stdev=17677.68 00:28:50.123 clat (usec): min=4591, max=56649, avg=27994.87, stdev=2695.05 00:28:50.123 lat (usec): min=4599, max=56666, avg=28019.02, stdev=2694.09 00:28:50.123 clat percentiles (usec): 00:28:50.123 | 1.00th=[21365], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:50.123 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:50.123 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[30802], 00:28:50.123 | 99.00th=[39060], 99.50th=[42206], 99.90th=[56361], 99.95th=[56886], 00:28:50.123 | 99.99th=[56886] 00:28:50.123 bw ( KiB/s): min= 2048, max= 2324, per=4.12%, avg=2269.20, stdev=69.89, samples=20 00:28:50.123 iops : min= 512, max= 581, avg=567.30, stdev=17.47, samples=20 00:28:50.123 lat (msec) : 10=0.04%, 20=0.53%, 50=99.16%, 100=0.28% 00:28:50.123 cpu : usr=98.68%, sys=0.93%, ctx=13, majf=0, minf=56 00:28:50.123 IO depths : 1=0.1%, 2=2.4%, 4=10.7%, 8=71.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:50.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 complete : 0=0.0%, 4=91.5%, 8=5.9%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 issued rwts: total=5688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.123 filename2: (groupid=0, jobs=1): err= 0: pid=3872904: Mon Jul 15 22:06:42 2024 00:28:50.123 read: IOPS=571, BW=2287KiB/s (2342kB/s)(22.4MiB/10008msec) 00:28:50.123 slat (nsec): min=6788, max=78978, avg=30714.04, stdev=17741.51 00:28:50.123 clat (usec): min=18387, max=66763, avg=27708.02, stdev=2847.79 00:28:50.123 lat (usec): min=18407, max=66785, avg=27738.73, stdev=2847.10 00:28:50.123 clat percentiles (usec): 00:28:50.123 | 1.00th=[19006], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:50.123 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.123 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:50.123 | 99.00th=[36963], 99.50th=[42206], 99.90th=[59507], 99.95th=[66847], 00:28:50.123 | 99.99th=[66847] 00:28:50.123 bw ( KiB/s): min= 2048, max= 2352, per=4.14%, avg=2281.26, stdev=68.37, samples=19 00:28:50.123 iops : min= 512, max= 588, avg=570.32, stdev=17.09, samples=19 00:28:50.123 lat (msec) : 20=2.31%, 50=97.41%, 100=0.28% 00:28:50.123 cpu : usr=98.84%, sys=0.77%, ctx=54, majf=0, minf=57 00:28:50.123 IO depths : 1=4.4%, 2=9.7%, 4=23.0%, 8=54.6%, 16=8.3%, 32=0.0%, >=64=0.0% 00:28:50.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.123 issued rwts: total=5722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.123 filename2: (groupid=0, jobs=1): err= 0: pid=3872905: Mon Jul 15 22:06:42 2024 00:28:50.123 read: IOPS=574, BW=2297KiB/s (2352kB/s)(22.4MiB/10004msec) 00:28:50.123 slat (nsec): min=7347, max=89839, avg=29279.15, stdev=15556.50 00:28:50.123 clat (usec): min=18670, max=33239, avg=27637.47, stdev=643.31 00:28:50.123 lat (usec): min=18686, max=33259, avg=27666.75, stdev=641.30 00:28:50.123 clat percentiles (usec): 00:28:50.123 | 1.00th=[26346], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:50.123 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.123 | 70.00th=[27657], 
80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.124 | 99.00th=[28443], 99.50th=[28443], 99.90th=[33162], 99.95th=[33162], 00:28:50.124 | 99.99th=[33162] 00:28:50.124 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2291.20, stdev=39.40, samples=20 00:28:50.124 iops : min= 544, max= 576, avg=572.80, stdev= 9.85, samples=20 00:28:50.124 lat (msec) : 20=0.28%, 50=99.72% 00:28:50.124 cpu : usr=98.96%, sys=0.66%, ctx=14, majf=0, minf=62 00:28:50.124 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:50.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.124 filename2: (groupid=0, jobs=1): err= 0: pid=3872906: Mon Jul 15 22:06:42 2024 00:28:50.124 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10007msec) 00:28:50.124 slat (nsec): min=5936, max=87665, avg=33017.12, stdev=17742.86 00:28:50.124 clat (usec): min=10805, max=54657, avg=27578.78, stdev=1871.44 00:28:50.124 lat (usec): min=10836, max=54674, avg=27611.80, stdev=1870.81 00:28:50.124 clat percentiles (usec): 00:28:50.124 | 1.00th=[26346], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:28:50.124 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.124 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.124 | 99.00th=[28443], 99.50th=[28705], 99.90th=[54789], 99.95th=[54789], 00:28:50.124 | 99.99th=[54789] 00:28:50.124 bw ( KiB/s): min= 2048, max= 2427, per=4.16%, avg=2290.95, stdev=70.20, samples=20 00:28:50.124 iops : min= 512, max= 606, avg=572.70, stdev=17.48, samples=20 00:28:50.124 lat (msec) : 20=0.66%, 50=99.06%, 100=0.28% 00:28:50.124 cpu : usr=98.72%, sys=0.90%, ctx=11, majf=0, minf=57 00:28:50.124 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:50.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.124 filename2: (groupid=0, jobs=1): err= 0: pid=3872907: Mon Jul 15 22:06:42 2024 00:28:50.124 read: IOPS=574, BW=2299KiB/s (2354kB/s)(22.5MiB/10004msec) 00:28:50.124 slat (nsec): min=6780, max=87859, avg=32356.24, stdev=18416.19 00:28:50.124 clat (usec): min=7118, max=51220, avg=27520.89, stdev=2029.18 00:28:50.124 lat (usec): min=7130, max=51238, avg=27553.25, stdev=2030.05 00:28:50.124 clat percentiles (usec): 00:28:50.124 | 1.00th=[25822], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:28:50.124 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:50.124 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.124 | 99.00th=[28443], 99.50th=[28967], 99.90th=[51119], 99.95th=[51119], 00:28:50.124 | 99.99th=[51119] 00:28:50.124 bw ( KiB/s): min= 2176, max= 2484, per=4.16%, avg=2294.20, stdev=64.33, samples=20 00:28:50.124 iops : min= 544, max= 621, avg=573.55, stdev=16.08, samples=20 00:28:50.124 lat (msec) : 10=0.10%, 20=0.73%, 50=98.89%, 100=0.28% 00:28:50.124 cpu : usr=98.73%, sys=0.89%, ctx=15, majf=0, minf=61 00:28:50.124 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:50.124 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 issued rwts: total=5750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.124 filename2: (groupid=0, jobs=1): err= 0: pid=3872908: Mon Jul 15 22:06:42 2024 00:28:50.124 read: IOPS=578, BW=2316KiB/s (2371kB/s)(22.6MiB/10004msec) 00:28:50.124 slat (nsec): min=4287, max=77080, avg=27640.89, stdev=16186.05 00:28:50.124 clat (usec): min=4497, max=35440, avg=27403.07, stdev=2288.55 00:28:50.124 lat (usec): min=4511, max=35474, avg=27430.71, stdev=2289.51 00:28:50.124 clat percentiles (usec): 00:28:50.124 | 1.00th=[13304], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:28:50.124 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.124 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.124 | 99.00th=[28443], 99.50th=[28705], 99.90th=[32375], 99.95th=[32375], 00:28:50.124 | 99.99th=[35390] 00:28:50.124 bw ( KiB/s): min= 2176, max= 2693, per=4.19%, avg=2310.65, stdev=98.20, samples=20 00:28:50.124 iops : min= 544, max= 673, avg=577.65, stdev=24.50, samples=20 00:28:50.124 lat (msec) : 10=0.83%, 20=0.86%, 50=98.31% 00:28:50.124 cpu : usr=98.60%, sys=0.95%, ctx=47, majf=0, minf=84 00:28:50.124 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:50.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.124 filename2: (groupid=0, jobs=1): err= 0: pid=3872909: Mon Jul 15 22:06:42 2024 00:28:50.124 read: IOPS=575, BW=2304KiB/s (2359kB/s)(22.5MiB/10004msec) 00:28:50.124 slat (usec): min=4, max=275, avg=27.84, stdev=16.52 00:28:50.124 clat (usec): min=13777, max=37076, avg=27580.25, stdev=1260.74 00:28:50.124 lat (usec): min=13789, max=37246, avg=27608.09, stdev=1260.10 00:28:50.124 clat percentiles (usec): 00:28:50.124 | 1.00th=[20579], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:50.124 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.124 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:50.124 | 99.00th=[28705], 99.50th=[31851], 99.90th=[36439], 99.95th=[36963], 00:28:50.124 | 99.99th=[36963] 00:28:50.124 bw ( KiB/s): min= 2176, max= 2320, per=4.17%, avg=2298.40, stdev=29.03, samples=20 00:28:50.124 iops : min= 544, max= 580, avg=574.60, stdev= 7.26, samples=20 00:28:50.124 lat (msec) : 20=0.87%, 50=99.13% 00:28:50.124 cpu : usr=98.95%, sys=0.67%, ctx=11, majf=0, minf=60 00:28:50.124 IO depths : 1=5.6%, 2=11.2%, 4=22.6%, 8=53.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:28:50.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 complete : 0=0.0%, 4=93.6%, 8=1.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 issued rwts: total=5762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.124 filename2: (groupid=0, jobs=1): err= 0: pid=3872910: Mon Jul 15 22:06:42 2024 00:28:50.124 read: IOPS=575, BW=2300KiB/s (2355kB/s)(22.5MiB/10003msec) 00:28:50.124 slat (nsec): min=6872, max=88142, avg=24326.42, stdev=15566.72 00:28:50.124 clat (usec): min=10617, max=78084, avg=27644.83, stdev=3272.39 
00:28:50.124 lat (usec): min=10624, max=78100, avg=27669.16, stdev=3271.95 00:28:50.124 clat percentiles (usec): 00:28:50.124 | 1.00th=[20055], 5.00th=[23725], 10.00th=[27132], 20.00th=[27395], 00:28:50.124 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:50.124 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[30540], 00:28:50.124 | 99.00th=[35390], 99.50th=[38011], 99.90th=[68682], 99.95th=[78119], 00:28:50.124 | 99.99th=[78119] 00:28:50.124 bw ( KiB/s): min= 2016, max= 2368, per=4.15%, avg=2288.00, stdev=72.54, samples=19 00:28:50.124 iops : min= 504, max= 592, avg=572.00, stdev=18.14, samples=19 00:28:50.124 lat (msec) : 20=0.97%, 50=98.75%, 100=0.28% 00:28:50.124 cpu : usr=98.90%, sys=0.70%, ctx=10, majf=0, minf=58 00:28:50.124 IO depths : 1=4.0%, 2=8.0%, 4=16.9%, 8=61.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:28:50.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 complete : 0=0.0%, 4=92.2%, 8=3.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.124 issued rwts: total=5752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.124 00:28:50.124 Run status group 0 (all jobs): 00:28:50.124 READ: bw=53.8MiB/s (56.4MB/s), 2274KiB/s-2323KiB/s (2329kB/s-2379kB/s), io=539MiB (566MB), run=10002-10022msec 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:50.124 22:06:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 bdev_null0 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 [2024-07-15 22:06:42.945838] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 bdev_null1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:50.125 22:06:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:50.125 { 00:28:50.125 "params": { 00:28:50.125 "name": "Nvme$subsystem", 00:28:50.125 "trtype": "$TEST_TRANSPORT", 00:28:50.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.125 "adrfam": "ipv4", 00:28:50.125 "trsvcid": "$NVMF_PORT", 00:28:50.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.125 "hdgst": ${hdgst:-false}, 00:28:50.125 "ddgst": ${ddgst:-false} 00:28:50.125 }, 00:28:50.125 "method": "bdev_nvme_attach_controller" 00:28:50.125 } 00:28:50.125 EOF 00:28:50.125 )") 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:50.125 { 00:28:50.125 "params": { 00:28:50.125 "name": "Nvme$subsystem", 00:28:50.125 "trtype": "$TEST_TRANSPORT", 00:28:50.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.125 "adrfam": "ipv4", 00:28:50.125 "trsvcid": "$NVMF_PORT", 00:28:50.125 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.125 "hdgst": ${hdgst:-false}, 00:28:50.125 "ddgst": ${ddgst:-false} 00:28:50.125 }, 00:28:50.125 "method": "bdev_nvme_attach_controller" 00:28:50.125 } 00:28:50.125 EOF 00:28:50.125 )") 00:28:50.125 22:06:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:50.125 22:06:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:50.125 22:06:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:50.125 22:06:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:50.125 "params": { 00:28:50.125 "name": "Nvme0", 00:28:50.125 "trtype": "tcp", 00:28:50.125 "traddr": "10.0.0.2", 00:28:50.125 "adrfam": "ipv4", 00:28:50.125 "trsvcid": "4420", 00:28:50.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.125 "hdgst": false, 00:28:50.125 "ddgst": false 00:28:50.125 }, 00:28:50.125 "method": "bdev_nvme_attach_controller" 00:28:50.125 },{ 00:28:50.125 "params": { 00:28:50.125 "name": "Nvme1", 00:28:50.125 "trtype": "tcp", 00:28:50.125 "traddr": "10.0.0.2", 00:28:50.125 "adrfam": "ipv4", 00:28:50.125 "trsvcid": "4420", 00:28:50.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.125 "hdgst": false, 00:28:50.125 "ddgst": false 00:28:50.125 }, 00:28:50.125 "method": "bdev_nvme_attach_controller" 00:28:50.125 }' 00:28:50.125 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.125 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.125 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.125 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.125 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:50.126 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.126 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.126 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.126 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:50.126 22:06:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.126 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:50.126 ... 00:28:50.126 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:50.126 ... 
00:28:50.126 fio-3.35 00:28:50.126 Starting 4 threads 00:28:50.126 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.425 00:28:55.425 filename0: (groupid=0, jobs=1): err= 0: pid=3874847: Mon Jul 15 22:06:49 2024 00:28:55.425 read: IOPS=2569, BW=20.1MiB/s (21.1MB/s)(101MiB/5042msec) 00:28:55.425 slat (nsec): min=6115, max=67366, avg=12555.70, stdev=8523.91 00:28:55.425 clat (usec): min=913, max=44740, avg=3060.81, stdev=1274.39 00:28:55.425 lat (usec): min=924, max=44758, avg=3073.37, stdev=1274.07 00:28:55.425 clat percentiles (usec): 00:28:55.425 | 1.00th=[ 2212], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2769], 00:28:55.425 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 2999], 00:28:55.425 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3589], 95.00th=[ 4113], 00:28:55.425 | 99.00th=[ 4752], 99.50th=[ 4817], 99.90th=[ 5538], 99.95th=[44827], 00:28:55.425 | 99.99th=[44827] 00:28:55.425 bw ( KiB/s): min=19552, max=22176, per=25.03%, avg=20724.80, stdev=756.13, samples=10 00:28:55.425 iops : min= 2444, max= 2772, avg=2590.60, stdev=94.52, samples=10 00:28:55.425 lat (usec) : 1000=0.01% 00:28:55.425 lat (msec) : 2=0.42%, 4=93.05%, 10=6.44%, 50=0.08% 00:28:55.425 cpu : usr=96.95%, sys=2.70%, ctx=8, majf=0, minf=49 00:28:55.425 IO depths : 1=0.1%, 2=1.8%, 4=70.8%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.425 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.425 issued rwts: total=12956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:55.425 filename0: (groupid=0, jobs=1): err= 0: pid=3874848: Mon Jul 15 22:06:49 2024 00:28:55.425 read: IOPS=2642, BW=20.6MiB/s (21.6MB/s)(103MiB/5002msec) 00:28:55.425 slat (nsec): min=6102, max=71244, avg=11494.23, stdev=7102.11 00:28:55.425 clat (usec): min=1268, max=42677, avg=2993.06, stdev=1069.64 00:28:55.425 lat (usec): min=1275, max=42702, avg=3004.55, stdev=1069.62 00:28:55.425 clat percentiles (usec): 00:28:55.425 | 1.00th=[ 2024], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2737], 00:28:55.425 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:28:55.425 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3392], 95.00th=[ 4015], 00:28:55.425 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5080], 99.95th=[42730], 00:28:55.425 | 99.99th=[42730] 00:28:55.425 bw ( KiB/s): min=19632, max=21824, per=25.56%, avg=21159.11, stdev=615.95, samples=9 00:28:55.425 iops : min= 2454, max= 2728, avg=2644.89, stdev=76.99, samples=9 00:28:55.425 lat (msec) : 2=0.89%, 4=93.99%, 10=5.07%, 50=0.06% 00:28:55.425 cpu : usr=96.62%, sys=3.06%, ctx=6, majf=0, minf=25 00:28:55.425 IO depths : 1=0.4%, 2=3.3%, 4=68.6%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.425 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.425 issued rwts: total=13218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:55.425 filename1: (groupid=0, jobs=1): err= 0: pid=3874849: Mon Jul 15 22:06:49 2024 00:28:55.425 read: IOPS=2563, BW=20.0MiB/s (21.0MB/s)(100MiB/5002msec) 00:28:55.425 slat (nsec): min=6214, max=62176, avg=14215.85, stdev=7313.04 00:28:55.425 clat (usec): min=1454, max=43490, avg=3079.55, stdev=1130.02 00:28:55.425 lat (usec): min=1477, max=43516, avg=3093.76, stdev=1129.76 00:28:55.425 clat 
percentiles (usec): 00:28:55.425 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2737], 00:28:55.425 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2999], 00:28:55.425 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3851], 95.00th=[ 4293], 00:28:55.425 | 99.00th=[ 4752], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[43254], 00:28:55.425 | 99.99th=[43254] 00:28:55.425 bw ( KiB/s): min=18864, max=21296, per=24.77%, avg=20505.60, stdev=725.50, samples=10 00:28:55.425 iops : min= 2358, max= 2662, avg=2563.20, stdev=90.69, samples=10 00:28:55.425 lat (msec) : 2=0.51%, 4=91.17%, 10=8.26%, 50=0.06% 00:28:55.425 cpu : usr=96.82%, sys=2.74%, ctx=28, majf=0, minf=30 00:28:55.425 IO depths : 1=0.1%, 2=3.4%, 4=68.9%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.425 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.425 issued rwts: total=12821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:55.425 filename1: (groupid=0, jobs=1): err= 0: pid=3874850: Mon Jul 15 22:06:49 2024 00:28:55.425 read: IOPS=2636, BW=20.6MiB/s (21.6MB/s)(103MiB/5002msec) 00:28:55.425 slat (nsec): min=6126, max=67434, avg=13139.71, stdev=9108.15 00:28:55.425 clat (usec): min=1067, max=5577, avg=2994.28, stdev=508.53 00:28:55.425 lat (usec): min=1073, max=5590, avg=3007.42, stdev=508.28 00:28:55.425 clat percentiles (usec): 00:28:55.425 | 1.00th=[ 1237], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769], 00:28:55.425 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2999], 00:28:55.425 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3556], 95.00th=[ 4113], 00:28:55.425 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5014], 99.95th=[ 5473], 00:28:55.425 | 99.99th=[ 5538] 00:28:55.425 bw ( KiB/s): min=20416, max=23072, per=25.48%, avg=21096.89, stdev=849.59, samples=9 00:28:55.426 iops : min= 2552, max= 2884, avg=2637.11, stdev=106.20, samples=9 00:28:55.426 lat (msec) : 2=2.15%, 4=91.33%, 10=6.53% 00:28:55.426 cpu : usr=97.14%, sys=2.50%, ctx=5, majf=0, minf=66 00:28:55.426 IO depths : 1=0.1%, 2=3.6%, 4=69.1%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.426 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.426 issued rwts: total=13188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:55.426 00:28:55.426 Run status group 0 (all jobs): 00:28:55.426 READ: bw=80.9MiB/s (84.8MB/s), 20.0MiB/s-20.6MiB/s (21.0MB/s-21.6MB/s), io=408MiB (427MB), run=5002-5042msec 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
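For reference, the subsystem setup and teardown traced above reduces to a short rpc.py sequence; a minimal standalone sketch, assuming a running nvmf target, an SPDK checkout as the working directory, and the 10.0.0.2/4420 listener used in this run:

    # create a DIF-capable null bdev and expose it over NVMe/TCP (flags as traced above)
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # teardown, mirroring the destroy_subsystems trace
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0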
00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.426 00:28:55.426 real 0m24.259s 00:28:55.426 user 4m52.009s 00:28:55.426 sys 0m4.157s 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.426 ************************************ 00:28:55.426 END TEST fio_dif_rand_params 00:28:55.426 ************************************ 00:28:55.426 22:06:49 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:55.426 22:06:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:55.426 22:06:49 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:55.426 22:06:49 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:55.426 ************************************ 00:28:55.426 START TEST fio_dif_digest 00:28:55.426 ************************************ 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:55.426 22:06:49 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.426 bdev_null0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.426 [2024-07-15 22:06:49.393157] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.426 { 00:28:55.426 "params": { 
00:28:55.426 "name": "Nvme$subsystem", 00:28:55.426 "trtype": "$TEST_TRANSPORT", 00:28:55.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.426 "adrfam": "ipv4", 00:28:55.426 "trsvcid": "$NVMF_PORT", 00:28:55.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.426 "hdgst": ${hdgst:-false}, 00:28:55.426 "ddgst": ${ddgst:-false} 00:28:55.426 }, 00:28:55.426 "method": "bdev_nvme_attach_controller" 00:28:55.426 } 00:28:55.426 EOF 00:28:55.426 )") 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:55.426 "params": { 00:28:55.426 "name": "Nvme0", 00:28:55.426 "trtype": "tcp", 00:28:55.426 "traddr": "10.0.0.2", 00:28:55.426 "adrfam": "ipv4", 00:28:55.426 "trsvcid": "4420", 00:28:55.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:55.426 "hdgst": true, 00:28:55.426 "ddgst": true 00:28:55.426 }, 00:28:55.426 "method": "bdev_nvme_attach_controller" 00:28:55.426 }' 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:55.426 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:55.427 22:06:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.693 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:55.693 ... 
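The bdev config piped to fio via /dev/fd/62 is only printed above as the joined "bdev_nvme_attach_controller" fragments; a minimal sketch of the full file, assuming SPDK's usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} wrapper, and with /tmp/nvme0_bdev.json and the fio job options chosen here for illustration (the spdk_bdev ioengine takes the bdev name, Nvme0n1 for controller Nvme0 namespace 1, and needs thread=1):

    cat <<'JSON' > /tmp/nvme0_bdev.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }
    JSON
    # run the fio bdev plugin against it, as the LD_PRELOAD invocation above does
    LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=/tmp/nvme0_bdev.json --thread=1 --name=digest \
        --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=10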
00:28:55.693 fio-3.35 00:28:55.693 Starting 3 threads 00:28:55.693 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.879 00:29:07.879 filename0: (groupid=0, jobs=1): err= 0: pid=3876045: Mon Jul 15 22:07:00 2024 00:29:07.879 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(357MiB/10046msec) 00:29:07.879 slat (nsec): min=6634, max=58612, avg=16390.11, stdev=5817.05 00:29:07.879 clat (usec): min=6426, max=49218, avg=10521.92, stdev=1312.71 00:29:07.879 lat (usec): min=6435, max=49228, avg=10538.31, stdev=1312.42 00:29:07.879 clat percentiles (usec): 00:29:07.879 | 1.00th=[ 7635], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:29:07.879 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:29:07.879 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:29:07.879 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13304], 99.95th=[46924], 00:29:07.879 | 99.99th=[49021] 00:29:07.879 bw ( KiB/s): min=35072, max=37632, per=35.02%, avg=36518.40, stdev=671.04, samples=20 00:29:07.879 iops : min= 274, max= 294, avg=285.30, stdev= 5.24, samples=20 00:29:07.879 lat (msec) : 10=24.20%, 20=75.73%, 50=0.07% 00:29:07.879 cpu : usr=95.46%, sys=4.08%, ctx=21, majf=0, minf=166 00:29:07.879 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.879 issued rwts: total=2855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:07.879 filename0: (groupid=0, jobs=1): err= 0: pid=3876046: Mon Jul 15 22:07:00 2024 00:29:07.879 read: IOPS=274, BW=34.3MiB/s (35.9MB/s)(344MiB/10045msec) 00:29:07.879 slat (usec): min=6, max=159, avg=16.05, stdev= 8.10 00:29:07.879 clat (usec): min=6495, max=54697, avg=10904.37, stdev=2447.35 00:29:07.879 lat (usec): min=6505, max=54729, avg=10920.42, stdev=2447.55 00:29:07.879 clat percentiles (usec): 00:29:07.879 | 1.00th=[ 8160], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:29:07.879 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:29:07.879 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:29:07.879 | 99.00th=[13173], 99.50th=[13566], 99.90th=[54789], 99.95th=[54789], 00:29:07.879 | 99.99th=[54789] 00:29:07.879 bw ( KiB/s): min=33024, max=37632, per=33.79%, avg=35238.40, stdev=1665.86, samples=20 00:29:07.879 iops : min= 258, max= 294, avg=275.30, stdev=13.01, samples=20 00:29:07.879 lat (msec) : 10=18.62%, 20=81.09%, 50=0.04%, 100=0.25% 00:29:07.879 cpu : usr=95.33%, sys=4.36%, ctx=21, majf=0, minf=190 00:29:07.879 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.879 issued rwts: total=2755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:07.879 filename0: (groupid=0, jobs=1): err= 0: pid=3876048: Mon Jul 15 22:07:00 2024 00:29:07.879 read: IOPS=257, BW=32.2MiB/s (33.7MB/s)(322MiB/10006msec) 00:29:07.879 slat (nsec): min=6577, max=47230, avg=15498.72, stdev=7544.47 00:29:07.879 clat (usec): min=5288, max=55935, avg=11643.33, stdev=2712.03 00:29:07.879 lat (usec): min=5296, max=55961, avg=11658.83, stdev=2712.10 00:29:07.879 clat percentiles (usec): 00:29:07.879 | 1.00th=[ 
8586], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:29:07.879 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:29:07.879 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:29:07.879 | 99.00th=[14091], 99.50th=[14746], 99.90th=[54264], 99.95th=[54789], 00:29:07.879 | 99.99th=[55837] 00:29:07.879 bw ( KiB/s): min=29696, max=35328, per=31.48%, avg=32825.26, stdev=1706.79, samples=19 00:29:07.879 iops : min= 232, max= 276, avg=256.42, stdev=13.34, samples=19 00:29:07.879 lat (msec) : 10=5.40%, 20=94.25%, 100=0.35% 00:29:07.879 cpu : usr=95.89%, sys=3.70%, ctx=16, majf=0, minf=174 00:29:07.879 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.879 issued rwts: total=2574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:07.880 00:29:07.880 Run status group 0 (all jobs): 00:29:07.880 READ: bw=102MiB/s (107MB/s), 32.2MiB/s-35.5MiB/s (33.7MB/s-37.2MB/s), io=1023MiB (1073MB), run=10006-10046msec 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.880 00:29:07.880 real 0m11.043s 00:29:07.880 user 0m35.081s 00:29:07.880 sys 0m1.480s 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:07.880 22:07:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.880 ************************************ 00:29:07.880 END TEST fio_dif_digest 00:29:07.880 ************************************ 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:07.880 22:07:00 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:07.880 22:07:00 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:29:07.880 rmmod nvme_tcp 00:29:07.880 rmmod nvme_fabrics 00:29:07.880 rmmod nvme_keyring 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3867047 ']' 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3867047 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3867047 ']' 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3867047 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3867047 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3867047' 00:29:07.880 killing process with pid 3867047 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3867047 00:29:07.880 22:07:00 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3867047 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:07.880 22:07:00 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:08.813 Waiting for block devices as requested 00:29:08.813 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:09.072 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:09.072 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:09.072 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:09.331 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:09.331 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:09.331 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:09.331 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:09.588 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:09.588 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:09.588 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:09.588 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:09.846 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:09.847 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:09.847 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:09.847 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:10.105 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:10.105 22:07:04 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:10.105 22:07:04 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:10.105 22:07:04 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.105 22:07:04 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:10.105 22:07:04 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.105 22:07:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:10.105 22:07:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.636 22:07:06 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.636 00:29:12.636 real 1m12.325s 00:29:12.636 user 7m9.401s 00:29:12.636 sys 0m17.276s 00:29:12.636 22:07:06 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:29:12.636 22:07:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:12.636 ************************************ 00:29:12.636 END TEST nvmf_dif 00:29:12.636 ************************************ 00:29:12.636 22:07:06 -- common/autotest_common.sh@1142 -- # return 0 00:29:12.636 22:07:06 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:12.636 22:07:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:12.636 22:07:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.636 22:07:06 -- common/autotest_common.sh@10 -- # set +x 00:29:12.636 ************************************ 00:29:12.636 START TEST nvmf_abort_qd_sizes 00:29:12.636 ************************************ 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:12.636 * Looking for test storage... 00:29:12.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.636 22:07:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:12.636 22:07:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:17.903 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:17.904 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:17.904 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:17.904 Found net devices under 0000:86:00.0: cvl_0_0 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:17.904 Found net devices under 0000:86:00.1: cvl_0_1 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
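The scan above maps each E810 port (0x8086:0x159b, driver ice) to its kernel netdev by globbing the function's net/ directory in sysfs; a quick manual equivalent, assuming the same 0000:86:00.x addresses as this node:

    # print the PCI-function -> netdev mapping the scan derives from sysfs
    for p in 0000:86:00.0 0000:86:00.1; do
        echo "$p -> $(ls /sys/bus/pci/devices/$p/net/)"
    done
    # expected here: 0000:86:00.0 -> cvl_0_0, 0000:86:00.1 -> cvl_0_1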
00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:17.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:29:17.904 00:29:17.904 --- 10.0.0.2 ping statistics --- 00:29:17.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.904 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:17.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:29:17.904 00:29:17.904 --- 10.0.0.1 ping statistics --- 00:29:17.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.904 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:17.904 22:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:19.803 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:19.803 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:20.062 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:20.062 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:20.062 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:20.062 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:20.062 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:20.630 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:20.888 22:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.888 22:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:20.888 22:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:20.888 22:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.888 22:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:20.888 22:07:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3883669 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3883669 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3883669 ']' 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:20.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:20.888 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:20.888 [2024-07-15 22:07:15.064839] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:29:20.888 [2024-07-15 22:07:15.064882] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.888 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.888 [2024-07-15 22:07:15.122451] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:21.147 [2024-07-15 22:07:15.208498] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.147 [2024-07-15 22:07:15.208532] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.147 [2024-07-15 22:07:15.208539] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.147 [2024-07-15 22:07:15.208546] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.147 [2024-07-15 22:07:15.208551] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:21.147 [2024-07-15 22:07:15.208591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.147 [2024-07-15 22:07:15.208607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.147 [2024-07-15 22:07:15.208624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.147 [2024-07-15 22:07:15.208625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:29:21.716 22:07:15 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.716 22:07:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:21.978 ************************************ 00:29:21.978 START TEST spdk_target_abort 00:29:21.978 ************************************ 00:29:21.978 22:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:21.978 22:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:21.978 22:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:21.978 22:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.978 22:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:25.279 spdk_targetn1 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:25.279 [2024-07-15 22:07:18.796022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:25.279 [2024-07-15 22:07:18.833073] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:25.279 22:07:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:25.279 EAL: No free 2048 kB hugepages 
reported on node 1 00:29:27.815 Initializing NVMe Controllers 00:29:27.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:27.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:27.815 Initialization complete. Launching workers. 00:29:27.815 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13597, failed: 0 00:29:27.815 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1485, failed to submit 12112 00:29:27.815 success 765, unsuccess 720, failed 0 00:29:27.815 22:07:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:27.815 22:07:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:27.815 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.010 Initializing NVMe Controllers 00:29:32.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:32.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:32.010 Initialization complete. Launching workers. 00:29:32.010 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8725, failed: 0 00:29:32.010 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1281, failed to submit 7444 00:29:32.010 success 298, unsuccess 983, failed 0 00:29:32.010 22:07:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:32.010 22:07:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:32.010 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.543 Initializing NVMe Controllers 00:29:34.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:34.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:34.543 Initialization complete. Launching workers. 
00:29:34.543 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37783, failed: 0 00:29:34.543 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2827, failed to submit 34956 00:29:34.543 success 576, unsuccess 2251, failed 0 00:29:34.543 22:07:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:34.543 22:07:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.543 22:07:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:34.543 22:07:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.543 22:07:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:34.543 22:07:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.543 22:07:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3883669 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3883669 ']' 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3883669 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3883669 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3883669' 00:29:35.918 killing process with pid 3883669 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3883669 00:29:35.918 22:07:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3883669 00:29:35.918 00:29:35.918 real 0m14.086s 00:29:35.918 user 0m56.162s 00:29:35.918 sys 0m2.275s 00:29:35.918 22:07:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:35.918 22:07:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:35.918 ************************************ 00:29:35.918 END TEST spdk_target_abort 00:29:35.918 ************************************ 00:29:35.918 22:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:35.918 22:07:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:35.918 22:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:35.918 22:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.918 22:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:35.918 
************************************ 00:29:35.918 START TEST kernel_target_abort 00:29:35.918 ************************************ 00:29:35.918 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:35.918 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:35.918 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:35.919 22:07:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:38.474 Waiting for block devices as requested 00:29:38.474 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:38.474 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:38.474 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:38.474 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:38.474 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:38.474 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:38.474 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:38.731 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:38.731 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:38.731 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:38.731 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:38.988 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:38.988 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:38.988 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:39.244 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:39.244 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:39.244 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:39.244 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:39.244 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:39.244 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:39.244 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:39.244 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:39.244 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:39.244 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:39.244 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:39.244 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:39.502 No valid GPT data, bailing 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:39.502 22:07:33 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:29:39.502
00:29:39.502 Discovery Log Number of Records 2, Generation counter 2
00:29:39.502 =====Discovery Log Entry 0======
00:29:39.502 trtype: tcp
00:29:39.502 adrfam: ipv4
00:29:39.502 subtype: current discovery subsystem
00:29:39.502 treq: not specified, sq flow control disable supported
00:29:39.502 portid: 1
00:29:39.502 trsvcid: 4420
00:29:39.502 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:29:39.502 traddr: 10.0.0.1
00:29:39.502 eflags: none
00:29:39.502 sectype: none
00:29:39.502 =====Discovery Log Entry 1======
00:29:39.502 trtype: tcp
00:29:39.502 adrfam: ipv4
00:29:39.502 subtype: nvme subsystem
00:29:39.502 treq: not specified, sq flow control disable supported
00:29:39.502 portid: 1
00:29:39.502 trsvcid: 4420
00:29:39.502 subnqn: nqn.2016-06.io.spdk:testnqn
00:29:39.502 traddr: 10.0.0.1
00:29:39.502 eflags: none
00:29:39.502 sectype: none
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:29:39.502 22:07:33
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:39.502 22:07:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:39.502 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.786 Initializing NVMe Controllers 00:29:42.786 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:42.786 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:42.786 Initialization complete. Launching workers. 00:29:42.786 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76141, failed: 0 00:29:42.786 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 76141, failed to submit 0 00:29:42.786 success 0, unsuccess 76141, failed 0 00:29:42.786 22:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:42.786 22:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:42.786 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.108 Initializing NVMe Controllers 00:29:46.108 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:46.108 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:46.108 Initialization complete. Launching workers. 
00:29:46.108 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 126712, failed: 0 00:29:46.108 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31730, failed to submit 94982 00:29:46.108 success 0, unsuccess 31730, failed 0 00:29:46.108 22:07:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:46.108 22:07:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:46.108 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.395 Initializing NVMe Controllers 00:29:49.395 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:49.395 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:49.395 Initialization complete. Launching workers. 00:29:49.395 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 120441, failed: 0 00:29:49.395 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30114, failed to submit 90327 00:29:49.395 success 0, unsuccess 30114, failed 0 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:49.395 22:07:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:51.295 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:51.295 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:51.296 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:51.296 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:29:51.296 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:51.296 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:52.229 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:52.229 00:29:52.229 real 0m16.297s 00:29:52.229 user 0m7.470s 00:29:52.229 sys 0m4.618s 00:29:52.229 22:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:52.229 22:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:52.229 ************************************ 00:29:52.229 END TEST kernel_target_abort 00:29:52.229 ************************************ 00:29:52.229 22:07:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:52.229 22:07:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:52.229 22:07:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:52.229 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.229 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:52.229 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:52.229 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:52.229 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:52.229 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:52.229 rmmod nvme_tcp 00:29:52.229 rmmod nvme_fabrics 00:29:52.487 rmmod nvme_keyring 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3883669 ']' 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3883669 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3883669 ']' 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3883669 00:29:52.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3883669) - No such process 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3883669 is not found' 00:29:52.487 Process with pid 3883669 is not found 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:52.487 22:07:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:54.391 Waiting for block devices as requested 00:29:54.650 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:54.650 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:54.650 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:54.650 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:54.908 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:54.908 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:54.908 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:54.908 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:55.167 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:55.167 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:55.167 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:55.428 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:55.428 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:55.428 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:29:55.428 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:55.688 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:55.688 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:55.688 22:07:49 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:55.688 22:07:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:55.688 22:07:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.688 22:07:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.688 22:07:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.688 22:07:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:55.688 22:07:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.219 22:07:51 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:58.219 00:29:58.219 real 0m45.558s 00:29:58.219 user 1m7.189s 00:29:58.219 sys 0m14.373s 00:29:58.219 22:07:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:58.219 22:07:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:58.219 ************************************ 00:29:58.219 END TEST nvmf_abort_qd_sizes 00:29:58.219 ************************************ 00:29:58.219 22:07:51 -- common/autotest_common.sh@1142 -- # return 0 00:29:58.219 22:07:51 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:58.219 22:07:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:58.219 22:07:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:58.219 22:07:51 -- common/autotest_common.sh@10 -- # set +x 00:29:58.219 ************************************ 00:29:58.219 START TEST keyring_file 00:29:58.219 ************************************ 00:29:58.219 22:07:51 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:58.219 * Looking for test storage... 
00:29:58.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:58.219 22:07:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:58.219 22:07:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.219 22:07:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.220 22:07:52 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.220 22:07:52 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.220 22:07:52 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.220 22:07:52 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.220 22:07:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.220 22:07:52 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.220 22:07:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:58.220 22:07:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.o4x75UYK3Z 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:58.220 22:07:52 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.o4x75UYK3Z 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.o4x75UYK3Z 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.o4x75UYK3Z 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jLiWKBlfZT 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:58.220 22:07:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jLiWKBlfZT 00:29:58.220 22:07:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jLiWKBlfZT 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jLiWKBlfZT 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=3892219 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3892219 00:29:58.220 22:07:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:58.220 22:07:52 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3892219 ']' 00:29:58.220 22:07:52 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.220 22:07:52 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:58.220 22:07:52 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.220 22:07:52 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:58.220 22:07:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:58.220 [2024-07-15 22:07:52.253632] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
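[Note] prep_key above wrote key0 and key1 to /tmp/tmp.o4x75UYK3Z and /tmp/tmp.jLiWKBlfZT in the NVMe TLS PSK interchange format and restricted them to mode 0600 before spdk_tgt came up. A sketch of that formatting step, assuming (from the python helper invoked by nvmf/common.sh above) that the payload is base64 of the raw key followed by its 4-byte little-endian CRC32, with "00" as the no-hash digest tag; treat those encoding details as an assumption and the temp path as illustrative:

key=00112233445566778899aabbccddeeff   # key0 from file.sh
psk=$(python3 - "$key" <<'EOF'
import base64, sys, zlib
k = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(k).to_bytes(4, "little")  # assumed little-endian, matching nvmf/common.sh
print("NVMeTLSkey-1:00:" + base64.b64encode(k + crc).decode() + ":", end="")
EOF
)
path=$(mktemp)        # e.g. /tmp/tmp.o4x75UYK3Z in this run
printf '%s' "$psk" > "$path"
chmod 0600 "$path"    # match the 0600 applied by keyring/common.sh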
00:29:58.220 [2024-07-15 22:07:52.253681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892219 ] 00:29:58.220 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.220 [2024-07-15 22:07:52.307046] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.220 [2024-07-15 22:07:52.386292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:59.151 22:07:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:59.151 [2024-07-15 22:07:53.053379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.151 null0 00:29:59.151 [2024-07-15 22:07:53.085431] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:59.151 [2024-07-15 22:07:53.085709] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:59.151 [2024-07-15 22:07:53.093437] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.151 22:07:53 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:59.151 [2024-07-15 22:07:53.105471] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:59.151 request: 00:29:59.151 { 00:29:59.151 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.151 "secure_channel": false, 00:29:59.151 "listen_address": { 00:29:59.151 "trtype": "tcp", 00:29:59.151 "traddr": "127.0.0.1", 00:29:59.151 "trsvcid": "4420" 00:29:59.151 }, 00:29:59.151 "method": "nvmf_subsystem_add_listener", 00:29:59.151 "req_id": 1 00:29:59.151 } 00:29:59.151 Got JSON-RPC error response 00:29:59.151 response: 00:29:59.151 { 00:29:59.151 "code": -32602, 00:29:59.151 "message": "Invalid parameters" 00:29:59.151 } 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 
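[Note] The failing rpc_cmd above is the intended negative test: file.sh already added the 127.0.0.1:4420 listener at startup, so a second nvmf_subsystem_add_listener is rejected (the target logs "Listener already exists" and returns JSON-RPC error -32602), and the NOT wrapper turns that expected failure into a pass. A hand-run sketch of the same check, run from the spdk checkout against the default /var/tmp/spdk.sock; rpc.py exits non-zero on a JSON-RPC error:

# a second add of the same listener is expected to fail
if scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo "unexpected success: duplicate listener was accepted" >&2
    exit 1
fi
echo "duplicate listener rejected, as expected"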
00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:59.151 22:07:53 keyring_file -- keyring/file.sh@46 -- # bperfpid=3892446 00:29:59.151 22:07:53 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3892446 /var/tmp/bperf.sock 00:29:59.151 22:07:53 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3892446 ']' 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:59.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:59.151 22:07:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:59.151 [2024-07-15 22:07:53.156814] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 00:29:59.151 [2024-07-15 22:07:53.156855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892446 ] 00:29:59.151 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.151 [2024-07-15 22:07:53.211578] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.151 [2024-07-15 22:07:53.290493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.083 22:07:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:00.083 22:07:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:00.083 22:07:53 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.o4x75UYK3Z 00:30:00.083 22:07:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.o4x75UYK3Z 00:30:00.083 22:07:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jLiWKBlfZT 00:30:00.083 22:07:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jLiWKBlfZT 00:30:00.083 22:07:54 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:00.083 22:07:54 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:00.083 22:07:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.083 22:07:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:00.083 22:07:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.341 22:07:54 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.o4x75UYK3Z == \/\t\m\p\/\t\m\p\.\o\4\x\7\5\U\Y\K\3\Z ]] 00:30:00.341 22:07:54 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:30:00.341 22:07:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:00.341 22:07:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.341 22:07:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.341 22:07:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:00.599 22:07:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.jLiWKBlfZT == \/\t\m\p\/\t\m\p\.\j\L\i\W\K\B\l\f\Z\T ]] 00:30:00.599 22:07:54 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:00.599 22:07:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:00.599 22:07:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:00.599 22:07:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.599 22:07:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:00.599 22:07:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.599 22:07:54 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:00.858 22:07:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:00.858 22:07:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:00.858 22:07:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:00.858 22:07:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:00.858 22:07:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.858 22:07:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.858 22:07:55 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:00.858 22:07:55 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:00.858 22:07:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:01.117 [2024-07-15 22:07:55.172883] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:01.117 nvme0n1 00:30:01.117 22:07:55 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:01.117 22:07:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:01.117 22:07:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:01.117 22:07:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:01.117 22:07:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:01.117 22:07:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:01.376 22:07:55 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:01.376 22:07:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:01.376 22:07:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:01.376 22:07:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:01.376 22:07:55 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:01.376 22:07:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:01.376 22:07:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:01.635 22:07:55 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:01.635 22:07:55 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:01.635 Running I/O for 1 seconds... 00:30:02.573 00:30:02.573 Latency(us) 00:30:02.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.573 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:02.574 nvme0n1 : 1.01 13143.77 51.34 0.00 0.00 9710.76 5299.87 18692.01 00:30:02.574 =================================================================================================================== 00:30:02.574 Total : 13143.77 51.34 0.00 0.00 9710.76 5299.87 18692.01 00:30:02.574 0 00:30:02.574 22:07:56 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:02.574 22:07:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:02.832 22:07:56 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:02.832 22:07:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:02.832 22:07:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:02.832 22:07:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:02.832 22:07:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:02.832 22:07:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:03.091 22:07:57 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:03.091 22:07:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:03.091 22:07:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:03.091 22:07:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:03.091 22:07:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:03.091 22:07:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:03.091 22:07:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:03.091 22:07:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:03.091 22:07:57 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:03.091 22:07:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:03.091 22:07:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:03.091 22:07:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:03.091 22:07:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:03.091 22:07:57 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:03.091 22:07:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:03.091 22:07:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:03.091 22:07:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:03.350 [2024-07-15 22:07:57.441453] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:03.350 [2024-07-15 22:07:57.441959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd7970 (107): Transport endpoint is not connected 00:30:03.350 [2024-07-15 22:07:57.442954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd7970 (9): Bad file descriptor 00:30:03.350 [2024-07-15 22:07:57.443954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:03.350 [2024-07-15 22:07:57.443965] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:03.350 [2024-07-15 22:07:57.443972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:03.350 request: 00:30:03.350 { 00:30:03.350 "name": "nvme0", 00:30:03.350 "trtype": "tcp", 00:30:03.350 "traddr": "127.0.0.1", 00:30:03.350 "adrfam": "ipv4", 00:30:03.350 "trsvcid": "4420", 00:30:03.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:03.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:03.350 "prchk_reftag": false, 00:30:03.350 "prchk_guard": false, 00:30:03.350 "hdgst": false, 00:30:03.350 "ddgst": false, 00:30:03.350 "psk": "key1", 00:30:03.350 "method": "bdev_nvme_attach_controller", 00:30:03.350 "req_id": 1 00:30:03.350 } 00:30:03.350 Got JSON-RPC error response 00:30:03.350 response: 00:30:03.350 { 00:30:03.350 "code": -5, 00:30:03.350 "message": "Input/output error" 00:30:03.350 } 00:30:03.350 22:07:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:03.350 22:07:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:03.350 22:07:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:03.350 22:07:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:03.350 22:07:57 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:03.350 22:07:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:03.350 22:07:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:03.350 22:07:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:03.350 22:07:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:03.350 22:07:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:03.608 22:07:57 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:03.608 22:07:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:03.608 22:07:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:03.608 22:07:57 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:03.608 22:07:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:03.608 22:07:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:03.608 22:07:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:03.608 22:07:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:03.608 22:07:57 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:03.608 22:07:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:03.867 22:07:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:03.867 22:07:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:04.131 22:07:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:04.131 22:07:58 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:04.131 22:07:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:04.131 22:07:58 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:04.131 22:07:58 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.o4x75UYK3Z 00:30:04.131 22:07:58 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.o4x75UYK3Z 00:30:04.131 22:07:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:04.131 22:07:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.o4x75UYK3Z 00:30:04.131 22:07:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:04.131 22:07:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.131 22:07:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:04.131 22:07:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.131 22:07:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.o4x75UYK3Z 00:30:04.131 22:07:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.o4x75UYK3Z 00:30:04.451 [2024-07-15 22:07:58.504049] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.o4x75UYK3Z': 0100660 00:30:04.451 [2024-07-15 22:07:58.504077] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:04.451 request: 00:30:04.451 { 00:30:04.451 "name": "key0", 00:30:04.451 "path": "/tmp/tmp.o4x75UYK3Z", 00:30:04.451 "method": "keyring_file_add_key", 00:30:04.451 "req_id": 1 00:30:04.451 } 00:30:04.451 Got JSON-RPC error response 00:30:04.451 response: 00:30:04.451 { 00:30:04.451 "code": -1, 00:30:04.451 "message": "Operation not permitted" 00:30:04.451 } 00:30:04.451 22:07:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:04.451 22:07:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:04.451 22:07:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:04.451 22:07:58 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:04.451 22:07:58 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.o4x75UYK3Z 00:30:04.451 22:07:58 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.o4x75UYK3Z 00:30:04.451 22:07:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.o4x75UYK3Z 00:30:04.710 22:07:58 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.o4x75UYK3Z 00:30:04.710 22:07:58 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:04.710 22:07:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:04.710 22:07:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:04.710 22:07:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:04.710 22:07:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:04.710 22:07:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:04.710 22:07:58 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:04.710 22:07:58 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:04.710 22:07:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:04.710 22:07:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:04.710 22:07:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:04.710 22:07:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.710 22:07:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:04.710 22:07:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.710 22:07:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:04.710 22:07:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:04.970 [2024-07-15 22:07:59.041488] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.o4x75UYK3Z': No such file or directory 00:30:04.970 [2024-07-15 22:07:59.041513] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:04.970 [2024-07-15 22:07:59.041531] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:04.970 [2024-07-15 22:07:59.041537] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:04.970 [2024-07-15 22:07:59.041543] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:04.970 request: 00:30:04.970 { 00:30:04.970 "name": "nvme0", 00:30:04.970 "trtype": "tcp", 00:30:04.970 "traddr": "127.0.0.1", 00:30:04.970 "adrfam": "ipv4", 00:30:04.970 
"trsvcid": "4420", 00:30:04.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:04.970 "prchk_reftag": false, 00:30:04.970 "prchk_guard": false, 00:30:04.970 "hdgst": false, 00:30:04.970 "ddgst": false, 00:30:04.970 "psk": "key0", 00:30:04.970 "method": "bdev_nvme_attach_controller", 00:30:04.970 "req_id": 1 00:30:04.970 } 00:30:04.970 Got JSON-RPC error response 00:30:04.970 response: 00:30:04.970 { 00:30:04.970 "code": -19, 00:30:04.970 "message": "No such device" 00:30:04.970 } 00:30:04.970 22:07:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:04.970 22:07:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:04.970 22:07:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:04.970 22:07:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:04.970 22:07:59 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:04.970 22:07:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:05.228 22:07:59 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TyNIopOpT9 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:05.228 22:07:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:05.228 22:07:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:05.228 22:07:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:05.228 22:07:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:05.228 22:07:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:05.228 22:07:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TyNIopOpT9 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TyNIopOpT9 00:30:05.228 22:07:59 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.TyNIopOpT9 00:30:05.228 22:07:59 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TyNIopOpT9 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TyNIopOpT9 00:30:05.228 22:07:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:05.228 22:07:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:05.487 nvme0n1 00:30:05.487 
22:07:59 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:05.487 22:07:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:05.488 22:07:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:05.488 22:07:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:05.488 22:07:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:05.488 22:07:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:05.747 22:07:59 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:05.747 22:07:59 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:05.747 22:07:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:06.005 22:08:00 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:06.005 22:08:00 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:06.005 22:08:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:06.005 22:08:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:06.005 22:08:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:06.005 22:08:00 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:06.005 22:08:00 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:06.005 22:08:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:06.005 22:08:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:06.005 22:08:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:06.005 22:08:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:06.005 22:08:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:06.265 22:08:00 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:06.265 22:08:00 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:06.265 22:08:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:06.524 22:08:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:06.524 22:08:00 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:06.524 22:08:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:06.524 22:08:00 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:06.524 22:08:00 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TyNIopOpT9 00:30:06.524 22:08:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TyNIopOpT9 00:30:06.782 22:08:00 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jLiWKBlfZT 00:30:06.782 22:08:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jLiWKBlfZT 00:30:07.040 22:08:01 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:07.040 22:08:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:07.298 nvme0n1 00:30:07.298 22:08:01 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:07.298 22:08:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:07.557 22:08:01 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:07.557 "subsystems": [ 00:30:07.557 { 00:30:07.557 "subsystem": "keyring", 00:30:07.557 "config": [ 00:30:07.557 { 00:30:07.557 "method": "keyring_file_add_key", 00:30:07.557 "params": { 00:30:07.557 "name": "key0", 00:30:07.557 "path": "/tmp/tmp.TyNIopOpT9" 00:30:07.557 } 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "method": "keyring_file_add_key", 00:30:07.557 "params": { 00:30:07.557 "name": "key1", 00:30:07.557 "path": "/tmp/tmp.jLiWKBlfZT" 00:30:07.557 } 00:30:07.557 } 00:30:07.557 ] 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "subsystem": "iobuf", 00:30:07.557 "config": [ 00:30:07.557 { 00:30:07.557 "method": "iobuf_set_options", 00:30:07.557 "params": { 00:30:07.557 "small_pool_count": 8192, 00:30:07.557 "large_pool_count": 1024, 00:30:07.557 "small_bufsize": 8192, 00:30:07.557 "large_bufsize": 135168 00:30:07.557 } 00:30:07.557 } 00:30:07.557 ] 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "subsystem": "sock", 00:30:07.557 "config": [ 00:30:07.557 { 00:30:07.557 "method": "sock_set_default_impl", 00:30:07.557 "params": { 00:30:07.557 "impl_name": "posix" 00:30:07.557 } 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "method": "sock_impl_set_options", 00:30:07.557 "params": { 00:30:07.557 "impl_name": "ssl", 00:30:07.557 "recv_buf_size": 4096, 00:30:07.557 "send_buf_size": 4096, 00:30:07.557 "enable_recv_pipe": true, 00:30:07.557 "enable_quickack": false, 00:30:07.557 "enable_placement_id": 0, 00:30:07.557 "enable_zerocopy_send_server": true, 00:30:07.557 "enable_zerocopy_send_client": false, 00:30:07.557 "zerocopy_threshold": 0, 00:30:07.557 "tls_version": 0, 00:30:07.557 "enable_ktls": false 00:30:07.557 } 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "method": "sock_impl_set_options", 00:30:07.557 "params": { 00:30:07.557 "impl_name": "posix", 00:30:07.557 "recv_buf_size": 2097152, 00:30:07.557 "send_buf_size": 2097152, 00:30:07.557 "enable_recv_pipe": true, 00:30:07.557 "enable_quickack": false, 00:30:07.557 "enable_placement_id": 0, 00:30:07.557 "enable_zerocopy_send_server": true, 00:30:07.557 "enable_zerocopy_send_client": false, 00:30:07.557 "zerocopy_threshold": 0, 00:30:07.557 "tls_version": 0, 00:30:07.557 "enable_ktls": false 00:30:07.557 } 00:30:07.557 } 00:30:07.557 ] 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "subsystem": "vmd", 00:30:07.557 "config": [] 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "subsystem": "accel", 00:30:07.557 "config": [ 00:30:07.557 { 00:30:07.557 "method": "accel_set_options", 00:30:07.557 "params": { 00:30:07.557 "small_cache_size": 128, 00:30:07.557 "large_cache_size": 16, 00:30:07.557 "task_count": 2048, 00:30:07.557 "sequence_count": 2048, 00:30:07.557 "buf_count": 2048 00:30:07.557 } 00:30:07.557 } 00:30:07.557 ] 00:30:07.557 
}, 00:30:07.557 { 00:30:07.557 "subsystem": "bdev", 00:30:07.557 "config": [ 00:30:07.557 { 00:30:07.557 "method": "bdev_set_options", 00:30:07.557 "params": { 00:30:07.557 "bdev_io_pool_size": 65535, 00:30:07.557 "bdev_io_cache_size": 256, 00:30:07.557 "bdev_auto_examine": true, 00:30:07.557 "iobuf_small_cache_size": 128, 00:30:07.557 "iobuf_large_cache_size": 16 00:30:07.557 } 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "method": "bdev_raid_set_options", 00:30:07.557 "params": { 00:30:07.557 "process_window_size_kb": 1024 00:30:07.557 } 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "method": "bdev_iscsi_set_options", 00:30:07.557 "params": { 00:30:07.557 "timeout_sec": 30 00:30:07.557 } 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "method": "bdev_nvme_set_options", 00:30:07.557 "params": { 00:30:07.557 "action_on_timeout": "none", 00:30:07.557 "timeout_us": 0, 00:30:07.557 "timeout_admin_us": 0, 00:30:07.557 "keep_alive_timeout_ms": 10000, 00:30:07.557 "arbitration_burst": 0, 00:30:07.557 "low_priority_weight": 0, 00:30:07.557 "medium_priority_weight": 0, 00:30:07.557 "high_priority_weight": 0, 00:30:07.557 "nvme_adminq_poll_period_us": 10000, 00:30:07.557 "nvme_ioq_poll_period_us": 0, 00:30:07.557 "io_queue_requests": 512, 00:30:07.557 "delay_cmd_submit": true, 00:30:07.557 "transport_retry_count": 4, 00:30:07.557 "bdev_retry_count": 3, 00:30:07.557 "transport_ack_timeout": 0, 00:30:07.557 "ctrlr_loss_timeout_sec": 0, 00:30:07.557 "reconnect_delay_sec": 0, 00:30:07.557 "fast_io_fail_timeout_sec": 0, 00:30:07.557 "disable_auto_failback": false, 00:30:07.557 "generate_uuids": false, 00:30:07.557 "transport_tos": 0, 00:30:07.557 "nvme_error_stat": false, 00:30:07.557 "rdma_srq_size": 0, 00:30:07.557 "io_path_stat": false, 00:30:07.557 "allow_accel_sequence": false, 00:30:07.557 "rdma_max_cq_size": 0, 00:30:07.557 "rdma_cm_event_timeout_ms": 0, 00:30:07.557 "dhchap_digests": [ 00:30:07.557 "sha256", 00:30:07.557 "sha384", 00:30:07.557 "sha512" 00:30:07.557 ], 00:30:07.557 "dhchap_dhgroups": [ 00:30:07.557 "null", 00:30:07.557 "ffdhe2048", 00:30:07.557 "ffdhe3072", 00:30:07.557 "ffdhe4096", 00:30:07.557 "ffdhe6144", 00:30:07.557 "ffdhe8192" 00:30:07.557 ] 00:30:07.557 } 00:30:07.557 }, 00:30:07.557 { 00:30:07.557 "method": "bdev_nvme_attach_controller", 00:30:07.557 "params": { 00:30:07.557 "name": "nvme0", 00:30:07.557 "trtype": "TCP", 00:30:07.557 "adrfam": "IPv4", 00:30:07.557 "traddr": "127.0.0.1", 00:30:07.557 "trsvcid": "4420", 00:30:07.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:07.557 "prchk_reftag": false, 00:30:07.557 "prchk_guard": false, 00:30:07.557 "ctrlr_loss_timeout_sec": 0, 00:30:07.557 "reconnect_delay_sec": 0, 00:30:07.557 "fast_io_fail_timeout_sec": 0, 00:30:07.557 "psk": "key0", 00:30:07.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:07.557 "hdgst": false, 00:30:07.558 "ddgst": false 00:30:07.558 } 00:30:07.558 }, 00:30:07.558 { 00:30:07.558 "method": "bdev_nvme_set_hotplug", 00:30:07.558 "params": { 00:30:07.558 "period_us": 100000, 00:30:07.558 "enable": false 00:30:07.558 } 00:30:07.558 }, 00:30:07.558 { 00:30:07.558 "method": "bdev_wait_for_examine" 00:30:07.558 } 00:30:07.558 ] 00:30:07.558 }, 00:30:07.558 { 00:30:07.558 "subsystem": "nbd", 00:30:07.558 "config": [] 00:30:07.558 } 00:30:07.558 ] 00:30:07.558 }' 00:30:07.558 22:08:01 keyring_file -- keyring/file.sh@114 -- # killprocess 3892446 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3892446 ']' 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3892446 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3892446 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3892446' 00:30:07.558 killing process with pid 3892446 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@967 -- # kill 3892446 00:30:07.558 Received shutdown signal, test time was about 1.000000 seconds 00:30:07.558 00:30:07.558 Latency(us) 00:30:07.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.558 =================================================================================================================== 00:30:07.558 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:07.558 22:08:01 keyring_file -- common/autotest_common.sh@972 -- # wait 3892446 00:30:07.816 22:08:01 keyring_file -- keyring/file.sh@117 -- # bperfpid=3893965 00:30:07.816 22:08:01 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3893965 /var/tmp/bperf.sock 00:30:07.816 22:08:01 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3893965 ']' 00:30:07.816 22:08:01 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:07.816 22:08:01 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:07.816 22:08:01 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:07.816 22:08:01 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:07.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
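The first bdevperf (pid 3892446) has been killed and a second one is started with -c /dev/fd/63: the JSON captured by save_config above is echoed back through a process-substitution fd, so the new instance recreates both file keys and the TLS-attached nvme0 controller at startup. A condensed sketch of that pattern, with the binary path shortened and the config variable assumed to hold the save_config output:

    config=$(rpc.py -s /var/tmp/bperf.sock save_config)    # captured from the old instance before killing it
    bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")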
00:30:07.816 22:08:01 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:07.816 "subsystems": [ 00:30:07.816 { 00:30:07.816 "subsystem": "keyring", 00:30:07.816 "config": [ 00:30:07.816 { 00:30:07.816 "method": "keyring_file_add_key", 00:30:07.816 "params": { 00:30:07.816 "name": "key0", 00:30:07.816 "path": "/tmp/tmp.TyNIopOpT9" 00:30:07.816 } 00:30:07.816 }, 00:30:07.816 { 00:30:07.816 "method": "keyring_file_add_key", 00:30:07.816 "params": { 00:30:07.816 "name": "key1", 00:30:07.816 "path": "/tmp/tmp.jLiWKBlfZT" 00:30:07.816 } 00:30:07.816 } 00:30:07.816 ] 00:30:07.816 }, 00:30:07.816 { 00:30:07.816 "subsystem": "iobuf", 00:30:07.816 "config": [ 00:30:07.816 { 00:30:07.816 "method": "iobuf_set_options", 00:30:07.816 "params": { 00:30:07.816 "small_pool_count": 8192, 00:30:07.816 "large_pool_count": 1024, 00:30:07.816 "small_bufsize": 8192, 00:30:07.816 "large_bufsize": 135168 00:30:07.816 } 00:30:07.816 } 00:30:07.816 ] 00:30:07.816 }, 00:30:07.816 { 00:30:07.816 "subsystem": "sock", 00:30:07.816 "config": [ 00:30:07.816 { 00:30:07.816 "method": "sock_set_default_impl", 00:30:07.816 "params": { 00:30:07.816 "impl_name": "posix" 00:30:07.816 } 00:30:07.816 }, 00:30:07.816 { 00:30:07.816 "method": "sock_impl_set_options", 00:30:07.816 "params": { 00:30:07.816 "impl_name": "ssl", 00:30:07.816 "recv_buf_size": 4096, 00:30:07.816 "send_buf_size": 4096, 00:30:07.816 "enable_recv_pipe": true, 00:30:07.816 "enable_quickack": false, 00:30:07.816 "enable_placement_id": 0, 00:30:07.816 "enable_zerocopy_send_server": true, 00:30:07.816 "enable_zerocopy_send_client": false, 00:30:07.816 "zerocopy_threshold": 0, 00:30:07.816 "tls_version": 0, 00:30:07.816 "enable_ktls": false 00:30:07.816 } 00:30:07.816 }, 00:30:07.816 { 00:30:07.816 "method": "sock_impl_set_options", 00:30:07.816 "params": { 00:30:07.816 "impl_name": "posix", 00:30:07.816 "recv_buf_size": 2097152, 00:30:07.816 "send_buf_size": 2097152, 00:30:07.816 "enable_recv_pipe": true, 00:30:07.817 "enable_quickack": false, 00:30:07.817 "enable_placement_id": 0, 00:30:07.817 "enable_zerocopy_send_server": true, 00:30:07.817 "enable_zerocopy_send_client": false, 00:30:07.817 "zerocopy_threshold": 0, 00:30:07.817 "tls_version": 0, 00:30:07.817 "enable_ktls": false 00:30:07.817 } 00:30:07.817 } 00:30:07.817 ] 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "subsystem": "vmd", 00:30:07.817 "config": [] 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "subsystem": "accel", 00:30:07.817 "config": [ 00:30:07.817 { 00:30:07.817 "method": "accel_set_options", 00:30:07.817 "params": { 00:30:07.817 "small_cache_size": 128, 00:30:07.817 "large_cache_size": 16, 00:30:07.817 "task_count": 2048, 00:30:07.817 "sequence_count": 2048, 00:30:07.817 "buf_count": 2048 00:30:07.817 } 00:30:07.817 } 00:30:07.817 ] 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "subsystem": "bdev", 00:30:07.817 "config": [ 00:30:07.817 { 00:30:07.817 "method": "bdev_set_options", 00:30:07.817 "params": { 00:30:07.817 "bdev_io_pool_size": 65535, 00:30:07.817 "bdev_io_cache_size": 256, 00:30:07.817 "bdev_auto_examine": true, 00:30:07.817 "iobuf_small_cache_size": 128, 00:30:07.817 "iobuf_large_cache_size": 16 00:30:07.817 } 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "method": "bdev_raid_set_options", 00:30:07.817 "params": { 00:30:07.817 "process_window_size_kb": 1024 00:30:07.817 } 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "method": "bdev_iscsi_set_options", 00:30:07.817 "params": { 00:30:07.817 "timeout_sec": 30 00:30:07.817 } 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "method": 
"bdev_nvme_set_options", 00:30:07.817 "params": { 00:30:07.817 "action_on_timeout": "none", 00:30:07.817 "timeout_us": 0, 00:30:07.817 "timeout_admin_us": 0, 00:30:07.817 "keep_alive_timeout_ms": 10000, 00:30:07.817 "arbitration_burst": 0, 00:30:07.817 "low_priority_weight": 0, 00:30:07.817 "medium_priority_weight": 0, 00:30:07.817 "high_priority_weight": 0, 00:30:07.817 "nvme_adminq_poll_period_us": 10000, 00:30:07.817 "nvme_ioq_poll_period_us": 0, 00:30:07.817 "io_queue_requests": 512, 00:30:07.817 "delay_cmd_submit": true, 00:30:07.817 "transport_retry_count": 4, 00:30:07.817 "bdev_retry_count": 3, 00:30:07.817 "transport_ack_timeout": 0, 00:30:07.817 "ctrlr_loss_timeout_sec": 0, 00:30:07.817 "reconnect_delay_sec": 0, 00:30:07.817 "fast_io_fail_timeout_sec": 0, 00:30:07.817 "disable_auto_failback": false, 00:30:07.817 "generate_uuids": false, 00:30:07.817 "transport_tos": 0, 00:30:07.817 "nvme_error_stat": false, 00:30:07.817 "rdma_srq_size": 0, 00:30:07.817 "io_path_stat": false, 00:30:07.817 "allow_accel_sequence": false, 00:30:07.817 "rdma_max_cq_size": 0, 00:30:07.817 "rdma_cm_event_timeout_ms": 0, 00:30:07.817 "dhchap_digests": [ 00:30:07.817 "sha256", 00:30:07.817 "sha384", 00:30:07.817 "sha512" 00:30:07.817 ], 00:30:07.817 "dhchap_dhgroups": [ 00:30:07.817 "null", 00:30:07.817 "ffdhe2048", 00:30:07.817 "ffdhe3072", 00:30:07.817 "ffdhe4096", 00:30:07.817 "ffdhe6144", 00:30:07.817 "ffdhe8192" 00:30:07.817 ] 00:30:07.817 } 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "method": "bdev_nvme_attach_controller", 00:30:07.817 "params": { 00:30:07.817 "name": "nvme0", 00:30:07.817 "trtype": "TCP", 00:30:07.817 "adrfam": "IPv4", 00:30:07.817 "traddr": "127.0.0.1", 00:30:07.817 "trsvcid": "4420", 00:30:07.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:07.817 "prchk_reftag": false, 00:30:07.817 "prchk_guard": false, 00:30:07.817 "ctrlr_loss_timeout_sec": 0, 00:30:07.817 "reconnect_delay_sec": 0, 00:30:07.817 "fast_io_fail_timeout_sec": 0, 00:30:07.817 "psk": "key0", 00:30:07.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:07.817 "hdgst": false, 00:30:07.817 "ddgst": false 00:30:07.817 } 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "method": "bdev_nvme_set_hotplug", 00:30:07.817 "params": { 00:30:07.817 "period_us": 100000, 00:30:07.817 "enable": false 00:30:07.817 } 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "method": "bdev_wait_for_examine" 00:30:07.817 } 00:30:07.817 ] 00:30:07.817 }, 00:30:07.817 { 00:30:07.817 "subsystem": "nbd", 00:30:07.817 "config": [] 00:30:07.817 } 00:30:07.817 ] 00:30:07.817 }' 00:30:07.817 22:08:01 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:07.817 22:08:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:07.817 [2024-07-15 22:08:01.860039] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
00:30:07.817 [2024-07-15 22:08:01.860087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893965 ] 00:30:07.817 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.817 [2024-07-15 22:08:01.914634] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.817 [2024-07-15 22:08:01.993762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.076 [2024-07-15 22:08:02.152177] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:08.643 22:08:02 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:08.643 22:08:02 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:08.643 22:08:02 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:08.643 22:08:02 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:08.643 22:08:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:08.643 22:08:02 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:08.643 22:08:02 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:08.643 22:08:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:08.643 22:08:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:08.643 22:08:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:08.643 22:08:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:08.643 22:08:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:08.902 22:08:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:08.902 22:08:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:08.902 22:08:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:08.902 22:08:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:08.902 22:08:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:08.902 22:08:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:08.902 22:08:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:09.160 22:08:03 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:09.160 22:08:03 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:09.160 22:08:03 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:09.160 22:08:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:09.160 22:08:03 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:09.160 22:08:03 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:09.160 22:08:03 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.TyNIopOpT9 /tmp/tmp.jLiWKBlfZT 00:30:09.160 22:08:03 keyring_file -- keyring/file.sh@20 -- # killprocess 3893965 00:30:09.160 22:08:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3893965 ']' 00:30:09.160 22:08:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3893965 00:30:09.160 22:08:03 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:30:09.160 22:08:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:09.160 22:08:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3893965 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3893965' 00:30:09.418 killing process with pid 3893965 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@967 -- # kill 3893965 00:30:09.418 Received shutdown signal, test time was about 1.000000 seconds 00:30:09.418 00:30:09.418 Latency(us) 00:30:09.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.418 =================================================================================================================== 00:30:09.418 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@972 -- # wait 3893965 00:30:09.418 22:08:03 keyring_file -- keyring/file.sh@21 -- # killprocess 3892219 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3892219 ']' 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3892219 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:09.418 22:08:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3892219 00:30:09.675 22:08:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:09.675 22:08:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:09.675 22:08:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3892219' 00:30:09.675 killing process with pid 3892219 00:30:09.675 22:08:03 keyring_file -- common/autotest_common.sh@967 -- # kill 3892219 00:30:09.675 [2024-07-15 22:08:03.664988] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:09.675 22:08:03 keyring_file -- common/autotest_common.sh@972 -- # wait 3892219 00:30:09.935 00:30:09.935 real 0m11.983s 00:30:09.935 user 0m28.289s 00:30:09.935 sys 0m2.806s 00:30:09.935 22:08:03 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:09.935 22:08:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:09.935 ************************************ 00:30:09.935 END TEST keyring_file 00:30:09.935 ************************************ 00:30:09.935 22:08:04 -- common/autotest_common.sh@1142 -- # return 0 00:30:09.935 22:08:04 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:09.935 22:08:04 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:09.935 22:08:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:09.935 22:08:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:09.935 22:08:04 -- common/autotest_common.sh@10 -- # set +x 00:30:09.935 ************************************ 00:30:09.935 START TEST keyring_linux 00:30:09.935 ************************************ 00:30:09.935 22:08:04 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:09.935 * Looking for test storage... 00:30:09.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:09.935 22:08:04 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:09.935 22:08:04 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.935 22:08:04 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.935 22:08:04 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.935 22:08:04 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.935 22:08:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.935 22:08:04 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.935 22:08:04 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.935 22:08:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:09.935 22:08:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.935 22:08:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:09.935 22:08:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:09.935 22:08:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:09.935 22:08:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:09.935 22:08:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:09.935 22:08:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:09.935 22:08:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:09.935 22:08:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:09.935 22:08:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:09.935 22:08:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:09.935 22:08:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:09.935 22:08:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:09.935 22:08:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:09.935 22:08:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:10.195 22:08:04 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:10.195 22:08:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:10.195 /tmp/:spdk-test:key0 00:30:10.195 22:08:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:10.195 22:08:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:10.195 22:08:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:10.195 22:08:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:10.195 22:08:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:10.195 22:08:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:10.195 22:08:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:10.195 22:08:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:10.195 22:08:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:10.195 22:08:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:10.195 22:08:04 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:10.195 22:08:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:10.195 22:08:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:10.195 22:08:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:10.195 22:08:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:10.195 /tmp/:spdk-test:key1 00:30:10.195 22:08:04 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3894407 00:30:10.195 22:08:04 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3894407 00:30:10.195 22:08:04 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:10.195 22:08:04 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3894407 ']' 00:30:10.195 22:08:04 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.195 22:08:04 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:10.195 22:08:04 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.195 22:08:04 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:10.195 22:08:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:10.195 [2024-07-15 22:08:04.281233] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
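The NVMeTLSkey-1 strings generated above follow the NVMe-oF PSK interchange format: a version prefix, a digest field ("00" here, meaning no PSK digest), and base64 of the configured key bytes with a 4-byte CRC32 appended. A self-contained sketch of what the inline `python -` in format_interchange_psk computes; this is a reconstruction of the helper, not its verbatim source:

    python3 -c '
    import base64, struct, zlib
    key = b"00112233445566778899aabbccddeeff"     # key0 material used by this test
    crc = struct.pack("<I", zlib.crc32(key))      # little-endian CRC32 of the key bytes
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
    '

Its output should match the NVMeTLSkey-1:00:MDAx...JEiQ: value that the keyctl call below loads into the kernel session keyring.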
00:30:10.195 [2024-07-15 22:08:04.281288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894407 ] 00:30:10.195 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.195 [2024-07-15 22:08:04.333234] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.195 [2024-07-15 22:08:04.406272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:11.133 22:08:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:11.133 [2024-07-15 22:08:05.090534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.133 null0 00:30:11.133 [2024-07-15 22:08:05.122584] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:11.133 [2024-07-15 22:08:05.122908] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.133 22:08:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:11.133 161717603 00:30:11.133 22:08:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:11.133 1029637126 00:30:11.133 22:08:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3894522 00:30:11.133 22:08:05 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:11.133 22:08:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3894522 /var/tmp/bperf.sock 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3894522 ']' 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:11.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:11.133 22:08:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:11.133 [2024-07-15 22:08:05.188877] Starting SPDK v24.09-pre git sha1 91f51bb85 / DPDK 24.03.0 initialization... 
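The two serial numbers printed right after the keyctl add calls below (161717603 and 1029637126) are the kernel session-keyring IDs that the later .sn comparisons and the final "1 links removed" unlink steps resolve. A condensed sketch of that round-trip, assuming the keyutils CLI is installed and $psk holds the interchange key built above:

# Sketch: the session-keyring round-trip exercised by keyring/linux.sh.
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # add to the session keyring; prints the serial
keyctl search @s user :spdk-test:key0             # resolve name -> serial; must match $sn
keyctl print "$sn"                                # dump the payload; must match $psk
keyctl unlink "$sn"                               # cleanup; reports "1 links removed"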
00:30:11.133 [2024-07-15 22:08:05.188924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894522 ] 00:30:11.133 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.133 [2024-07-15 22:08:05.242453] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.133 [2024-07-15 22:08:05.314349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.069 22:08:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:12.069 22:08:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:12.069 22:08:05 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:12.069 22:08:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:12.069 22:08:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:12.069 22:08:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:12.328 22:08:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:12.328 22:08:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:12.328 [2024-07-15 22:08:06.542413] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:12.587 nvme0n1 00:30:12.587 22:08:06 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:12.587 22:08:06 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:12.587 22:08:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:12.587 22:08:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:12.587 22:08:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:12.587 22:08:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:12.587 22:08:06 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:12.587 22:08:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:12.587 22:08:06 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:12.587 22:08:06 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:12.587 22:08:06 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:12.587 22:08:06 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:12.587 22:08:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:12.846 22:08:06 keyring_linux -- keyring/linux.sh@25 -- # sn=161717603 00:30:12.846 22:08:06 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:12.846 22:08:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:30:12.846 22:08:06 keyring_linux -- keyring/linux.sh@26 -- # [[ 161717603 == \1\6\1\7\1\7\6\0\3 ]] 00:30:12.846 22:08:06 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 161717603 00:30:12.846 22:08:06 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:12.846 22:08:06 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:12.846 Running I/O for 1 seconds... 00:30:14.223 00:30:14.223 Latency(us) 00:30:14.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.223 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:14.223 nvme0n1 : 1.01 13049.33 50.97 0.00 0.00 9770.52 4587.52 15614.66 00:30:14.223 =================================================================================================================== 00:30:14.223 Total : 13049.33 50.97 0.00 0.00 9770.52 4587.52 15614.66 00:30:14.223 0 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:14.223 22:08:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:14.223 22:08:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:14.223 22:08:08 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:14.223 22:08:08 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:14.223 22:08:08 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:14.223 22:08:08 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:14.223 22:08:08 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:14.223 22:08:08 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:14.223 22:08:08 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:14.223 22:08:08 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:14.223 22:08:08 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:14.481 [2024-07-15 22:08:08.614497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:14.481 [2024-07-15 22:08:08.614707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18098e0 (107): Transport endpoint is not connected 00:30:14.481 [2024-07-15 22:08:08.615702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18098e0 (9): Bad file descriptor 00:30:14.481 [2024-07-15 22:08:08.616703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.481 [2024-07-15 22:08:08.616716] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:14.481 [2024-07-15 22:08:08.616723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.481 request: 00:30:14.481 { 00:30:14.481 "name": "nvme0", 00:30:14.481 "trtype": "tcp", 00:30:14.481 "traddr": "127.0.0.1", 00:30:14.481 "adrfam": "ipv4", 00:30:14.481 "trsvcid": "4420", 00:30:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:14.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:14.481 "prchk_reftag": false, 00:30:14.481 "prchk_guard": false, 00:30:14.481 "hdgst": false, 00:30:14.481 "ddgst": false, 00:30:14.481 "psk": ":spdk-test:key1", 00:30:14.481 "method": "bdev_nvme_attach_controller", 00:30:14.481 "req_id": 1 00:30:14.481 } 00:30:14.481 Got JSON-RPC error response 00:30:14.481 response: 00:30:14.481 { 00:30:14.481 "code": -5, 00:30:14.481 "message": "Input/output error" 00:30:14.481 } 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@33 -- # sn=161717603 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 161717603 00:30:14.481 1 links removed 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@33 -- # sn=1029637126 00:30:14.481 
22:08:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1029637126 00:30:14.481 1 links removed 00:30:14.481 22:08:08 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3894522 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3894522 ']' 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3894522 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3894522 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3894522' 00:30:14.481 killing process with pid 3894522 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@967 -- # kill 3894522 00:30:14.481 Received shutdown signal, test time was about 1.000000 seconds 00:30:14.481 00:30:14.481 Latency(us) 00:30:14.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.481 =================================================================================================================== 00:30:14.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:14.481 22:08:08 keyring_linux -- common/autotest_common.sh@972 -- # wait 3894522 00:30:14.740 22:08:08 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3894407 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3894407 ']' 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3894407 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3894407 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3894407' 00:30:14.740 killing process with pid 3894407 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@967 -- # kill 3894407 00:30:14.740 22:08:08 keyring_linux -- common/autotest_common.sh@972 -- # wait 3894407 00:30:14.997 00:30:14.997 real 0m5.195s 00:30:14.997 user 0m9.109s 00:30:14.997 sys 0m1.445s 00:30:14.997 22:08:09 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:14.997 22:08:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:14.997 ************************************ 00:30:14.997 END TEST keyring_linux 00:30:14.997 ************************************ 00:30:15.256 22:08:09 -- common/autotest_common.sh@1142 -- # return 0 00:30:15.256 22:08:09 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:15.256 22:08:09 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:15.256 22:08:09 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:15.256 22:08:09 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:15.256 22:08:09 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:15.256 22:08:09 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:15.256 22:08:09 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:15.256 22:08:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:15.256 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:30:15.256 22:08:09 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:15.256 22:08:09 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:15.256 22:08:09 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:15.256 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:30:19.447 INFO: APP EXITING 00:30:19.447 INFO: killing all VMs 00:30:19.447 INFO: killing vhost app 00:30:19.447 INFO: EXIT DONE 00:30:22.022 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:30:22.022 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:22.022 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:23.929 Cleaning 00:30:23.929 Removing: /var/run/dpdk/spdk0/config 00:30:23.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:23.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:23.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:23.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:23.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:23.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:23.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:23.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:23.929 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:23.929 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:23.929 Removing: /var/run/dpdk/spdk1/config 00:30:23.929 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:23.929 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:23.929 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:23.929 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:23.929 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:23.929 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:23.929 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:23.929 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:23.929 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:23.929 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:23.929 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:23.929 Removing: /var/run/dpdk/spdk2/config 00:30:23.929 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:23.929 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:23.929 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:23.929 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:23.929 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:23.929 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:23.929 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:23.929 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:23.929 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:23.929 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:23.929 Removing: /var/run/dpdk/spdk3/config 00:30:23.929 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:23.929 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:23.929 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:23.930 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:23.930 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:23.930 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:23.930 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:23.930 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:23.930 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:24.190 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:24.190 Removing: /var/run/dpdk/spdk4/config 00:30:24.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:24.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:24.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:24.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:24.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:24.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:24.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:24.190 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:24.190 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:24.190 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:24.190 Removing: /dev/shm/bdev_svc_trace.1 00:30:24.190 Removing: /dev/shm/nvmf_trace.0 00:30:24.190 Removing: /dev/shm/spdk_tgt_trace.pid3510999 00:30:24.190 Removing: /var/run/dpdk/spdk0 00:30:24.190 Removing: /var/run/dpdk/spdk1 00:30:24.190 Removing: /var/run/dpdk/spdk2 00:30:24.190 Removing: /var/run/dpdk/spdk3 00:30:24.190 Removing: /var/run/dpdk/spdk4 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3508745 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3509929 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3510999 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3511631 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3512577 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3512818 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3513787 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3513975 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3514143 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3515666 00:30:24.190 Removing: 
/var/run/dpdk/spdk_pid3516967 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3517357 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3517699 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3518007 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3518295 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3518545 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3518799 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3519075 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3519827 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3522809 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3523076 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3523394 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3523568 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3524052 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3524070 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3524562 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3524768 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3525004 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3525066 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3525322 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3525550 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3525892 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3526144 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3526436 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3526702 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3526840 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3527007 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3527255 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3527509 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3527754 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3528001 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3528256 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3528507 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3528755 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3529005 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3529252 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3529506 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3529751 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3529998 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3530252 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3530503 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3530749 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3531000 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3531254 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3531509 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3531756 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3532005 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3532291 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3532600 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3536249 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3580125 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3584146 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3594147 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3600050 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3604040 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3604518 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3610506 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3616719 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3616729 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3617546 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3618353 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3619271 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3619850 00:30:24.190 Removing: /var/run/dpdk/spdk_pid3619956 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3620187 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3620203 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3620217 00:30:24.450 Removing: 
/var/run/dpdk/spdk_pid3621118 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3622033 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3622950 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3623423 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3623476 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3623821 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3624942 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3626099 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3634432 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3634689 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3639060 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3645307 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3647908 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3658090 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3666960 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3668688 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3669599 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3686085 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3689973 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3715041 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3719532 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3721137 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3722974 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3723209 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3723443 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3723690 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3724221 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3726593 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3727659 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3728244 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3730361 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3731078 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3731804 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3735853 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3745571 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3749598 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3755650 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3757065 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3758444 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3762719 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3766955 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3774257 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3774419 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3779315 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3779544 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3779768 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3780232 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3780237 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3784543 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3785079 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3789422 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3792162 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3797553 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3803100 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3811654 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3818855 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3818857 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3837092 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3837689 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3838386 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3839085 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3839953 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3840537 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3841234 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3841924 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3845958 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3846191 00:30:24.450 Removing: 
/var/run/dpdk/spdk_pid3852244 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3852297 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3854514 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3862197 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3862259 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3867186 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3869542 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3871503 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3872739 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3874739 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3875800 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3884304 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3884766 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3885436 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3887503 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3888060 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3888624 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3892219 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3892446 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3893965 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3894407 00:30:24.450 Removing: /var/run/dpdk/spdk_pid3894522 00:30:24.450 Clean 00:30:24.709 22:08:18 -- common/autotest_common.sh@1451 -- # return 0 00:30:24.709 22:08:18 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:24.709 22:08:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:24.709 22:08:18 -- common/autotest_common.sh@10 -- # set +x 00:30:24.709 22:08:18 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:24.709 22:08:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:24.709 22:08:18 -- common/autotest_common.sh@10 -- # set +x 00:30:24.709 22:08:18 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:24.709 22:08:18 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:24.709 22:08:18 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:24.709 22:08:18 -- spdk/autotest.sh@391 -- # hash lcov 00:30:24.709 22:08:18 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:24.709 22:08:18 -- spdk/autotest.sh@393 -- # hostname 00:30:24.710 22:08:18 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:24.969 geninfo: WARNING: invalid characters removed from testname! 
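The lcov capture just above (-c against the build tree, tagged with the hostname) is only the test half of the coverage report; the steps that follow fold it into a baseline and strip code that is not SPDK's own. In outline, assuming the cov_base.info baseline was captured with -i (zeroed counters) earlier in the job, and with $SPDK_DIR standing in for the workspace spdk checkout:

# Sketch: how cov_total.info is assembled and filtered below.
lcov -q -c -i -d "$SPDK_DIR" -o cov_base.info                # baseline, earlier in the run (assumed)
lcov -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info  # the capture shown above
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info  # merge the two tracefiles
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info       # drop vendored DPDK
lcov -q -r cov_total.info '/usr/*' -o cov_total.info         # drop system headers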
00:30:46.930 22:08:38 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:47.190 22:08:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:49.096 22:08:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:51.000 22:08:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:52.934 22:08:46 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:54.308 22:08:48 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:56.209 22:08:50 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:56.209 22:08:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.209 22:08:50 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:56.209 22:08:50 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.209 22:08:50 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.209 22:08:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.209 22:08:50 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.209 22:08:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.209 22:08:50 -- paths/export.sh@5 -- $ export PATH 00:30:56.209 22:08:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.209 22:08:50 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:56.209 22:08:50 -- common/autobuild_common.sh@444 -- $ date +%s 00:30:56.209 22:08:50 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721074130.XXXXXX 00:30:56.209 22:08:50 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721074130.gKoxgp 00:30:56.209 22:08:50 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:30:56.209 22:08:50 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:30:56.209 22:08:50 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:56.209 22:08:50 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:56.209 22:08:50 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:56.209 22:08:50 -- common/autobuild_common.sh@460 -- $ get_config_params 00:30:56.209 22:08:50 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:56.209 22:08:50 -- common/autotest_common.sh@10 -- $ set +x 00:30:56.209 22:08:50 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:56.209 22:08:50 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:30:56.209 22:08:50 -- pm/common@17 -- $ local monitor 00:30:56.209 22:08:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:56.209 22:08:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:56.209 22:08:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:56.209 22:08:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:56.209 22:08:50 -- pm/common@25 -- $ sleep 1 00:30:56.209 22:08:50 -- pm/common@21 -- $ date +%s 00:30:56.209 
22:08:50 -- pm/common@21 -- $ date +%s 00:30:56.209 22:08:50 -- pm/common@21 -- $ date +%s 00:30:56.209 22:08:50 -- pm/common@21 -- $ date +%s 00:30:56.209 22:08:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721074130 00:30:56.209 22:08:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721074130 00:30:56.209 22:08:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721074130 00:30:56.209 22:08:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721074130 00:30:56.209 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721074130_collect-vmstat.pm.log 00:30:56.209 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721074130_collect-cpu-load.pm.log 00:30:56.209 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721074130_collect-cpu-temp.pm.log 00:30:56.209 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721074130_collect-bmc-pm.bmc.pm.log 00:30:57.591 22:08:51 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:30:57.591 22:08:51 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:30:57.591 22:08:51 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:57.591 22:08:51 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:57.591 22:08:51 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:57.591 22:08:51 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:57.591 22:08:51 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:57.591 22:08:51 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:57.591 22:08:51 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:57.591 22:08:51 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:57.591 22:08:51 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:57.591 22:08:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:57.591 22:08:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:57.591 22:08:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:57.591 22:08:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:57.591 22:08:51 -- pm/common@44 -- $ pid=3904464 00:30:57.591 22:08:51 -- pm/common@50 -- $ kill -TERM 3904464 00:30:57.591 22:08:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:57.591 22:08:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:57.591 22:08:51 -- pm/common@44 -- $ pid=3904465 00:30:57.591 22:08:51 -- pm/common@50 -- $ kill 
-TERM 3904465 00:30:57.591 22:08:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:57.592 22:08:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:57.592 22:08:51 -- pm/common@44 -- $ pid=3904467 00:30:57.592 22:08:51 -- pm/common@50 -- $ kill -TERM 3904467 00:30:57.592 22:08:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:57.592 22:08:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:57.592 22:08:51 -- pm/common@44 -- $ pid=3904493 00:30:57.592 22:08:51 -- pm/common@50 -- $ sudo -E kill -TERM 3904493 00:30:57.592 + [[ -n 3404929 ]] 00:30:57.592 + sudo kill 3404929 00:30:57.689 [Pipeline] } 00:30:57.707 [Pipeline] // stage 00:30:57.712 [Pipeline] } 00:30:57.730 [Pipeline] // timeout 00:30:57.736 [Pipeline] } 00:30:57.755 [Pipeline] // catchError 00:30:57.761 [Pipeline] } 00:30:57.777 [Pipeline] // wrap 00:30:57.783 [Pipeline] } 00:30:57.799 [Pipeline] // catchError 00:30:57.809 [Pipeline] stage 00:30:57.811 [Pipeline] { (Epilogue) 00:30:57.826 [Pipeline] catchError 00:30:57.828 [Pipeline] { 00:30:57.845 [Pipeline] echo 00:30:57.848 Cleanup processes 00:30:57.855 [Pipeline] sh 00:30:58.143 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:58.143 3904596 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:58.143 3904866 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:58.159 [Pipeline] sh 00:30:58.443 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:58.443 ++ grep -v 'sudo pgrep' 00:30:58.443 ++ awk '{print $1}' 00:30:58.443 + sudo kill -9 3904596 00:30:58.457 [Pipeline] sh 00:30:58.741 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:08.736 [Pipeline] sh 00:31:09.021 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:09.021 Artifacts sizes are good 00:31:09.038 [Pipeline] archiveArtifacts 00:31:09.046 Archiving artifacts 00:31:09.236 [Pipeline] sh 00:31:09.513 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:31:09.528 [Pipeline] cleanWs 00:31:09.539 [WS-CLEANUP] Deleting project workspace... 00:31:09.539 [WS-CLEANUP] Deferred wipeout is used... 00:31:09.546 [WS-CLEANUP] done 00:31:09.547 [Pipeline] } 00:31:09.563 [Pipeline] // catchError 00:31:09.577 [Pipeline] sh 00:31:09.859 + logger -p user.info -t JENKINS-CI 00:31:09.870 [Pipeline] } 00:31:09.890 [Pipeline] // stage 00:31:09.897 [Pipeline] } 00:31:09.918 [Pipeline] // node 00:31:09.922 [Pipeline] End of Pipeline 00:31:09.959 Finished: SUCCESS